Sep 12 22:03:15.764296 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 22:03:15.764318 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 20:38:46 -00 2025 Sep 12 22:03:15.764328 kernel: KASLR enabled Sep 12 22:03:15.764333 kernel: efi: EFI v2.7 by EDK II Sep 12 22:03:15.764339 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Sep 12 22:03:15.764344 kernel: random: crng init done Sep 12 22:03:15.764351 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 12 22:03:15.764357 kernel: secureboot: Secure boot enabled Sep 12 22:03:15.764363 kernel: ACPI: Early table checksum verification disabled Sep 12 22:03:15.764370 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 12 22:03:15.764376 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 12 22:03:15.764382 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764388 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764394 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764401 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764408 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764414 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764420 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764427 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764433 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 22:03:15.764439 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 12 22:03:15.764445 kernel: ACPI: Use ACPI SPCR as default console: No Sep 12 22:03:15.764452 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 22:03:15.764458 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Sep 12 22:03:15.764464 kernel: Zone ranges: Sep 12 22:03:15.764471 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 22:03:15.764477 kernel: DMA32 empty Sep 12 22:03:15.764483 kernel: Normal empty Sep 12 22:03:15.764489 kernel: Device empty Sep 12 22:03:15.764495 kernel: Movable zone start for each node Sep 12 22:03:15.764501 kernel: Early memory node ranges Sep 12 22:03:15.764507 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 12 22:03:15.764513 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 12 22:03:15.764519 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 12 22:03:15.764525 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 12 22:03:15.764531 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 12 22:03:15.764537 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 12 22:03:15.764544 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 12 22:03:15.764550 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 12 22:03:15.764556 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 12 22:03:15.764564 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Sep 12 22:03:15.764571 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 12 22:03:15.764577 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 12 22:03:15.764583 kernel: psci: probing for conduit method from ACPI. Sep 12 22:03:15.764591 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 22:03:15.764598 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 22:03:15.764604 kernel: psci: Trusted OS migration not required Sep 12 22:03:15.764611 kernel: psci: SMC Calling Convention v1.1 Sep 12 22:03:15.764617 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 22:03:15.764624 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 12 22:03:15.764630 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 12 22:03:15.764637 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 12 22:03:15.764644 kernel: Detected PIPT I-cache on CPU0 Sep 12 22:03:15.764652 kernel: CPU features: detected: GIC system register CPU interface Sep 12 22:03:15.764658 kernel: CPU features: detected: Spectre-v4 Sep 12 22:03:15.764665 kernel: CPU features: detected: Spectre-BHB Sep 12 22:03:15.764672 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 22:03:15.764678 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 22:03:15.764685 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 22:03:15.764691 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 22:03:15.764697 kernel: alternatives: applying boot alternatives Sep 12 22:03:15.764705 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319fa5fb212e5dd8bf766d2f9f0bbb61d6aa6c81f2813f4b5b49defba0af2b2f Sep 12 22:03:15.764712 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 22:03:15.764718 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 22:03:15.764726 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 22:03:15.764733 kernel: Fallback order for Node 0: 0 Sep 12 22:03:15.764739 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 12 22:03:15.764746 kernel: Policy zone: DMA Sep 12 22:03:15.764752 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 22:03:15.764758 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 12 22:03:15.764765 kernel: software IO TLB: area num 4. Sep 12 22:03:15.764772 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 12 22:03:15.764778 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 12 22:03:15.764785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 22:03:15.764791 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 22:03:15.764798 kernel: rcu: RCU event tracing is enabled. Sep 12 22:03:15.764807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 22:03:15.764825 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 22:03:15.764832 kernel: Tracing variant of Tasks RCU enabled. Sep 12 22:03:15.764838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
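The kernel command line recorded above carries the Flatcar-specific parameters (mount.usr, verity.usr, verity.usrhash, flatcar.first_boot) that dracut, verity-setup and Ignition act on later in this log. As a rough illustration only, the same values can be read back at runtime from /proc/cmdline; the whitespace split below is a simplification, not the kernel's or Flatcar's own parser.

    from pathlib import Path

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into key/value pairs; bare flags map to True."""
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    if __name__ == "__main__":
        params = parse_cmdline(Path("/proc/cmdline").read_text().strip())
        print(params.get("root"))              # LABEL=ROOT in the log above
        print(params.get("verity.usrhash"))    # root hash consumed when /dev/mapper/usr is set up
        print(params.get("flatcar.first_boot"))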
Sep 12 22:03:15.764845 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 22:03:15.764851 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 22:03:15.764858 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 22:03:15.764865 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 22:03:15.764871 kernel: GICv3: 256 SPIs implemented Sep 12 22:03:15.764878 kernel: GICv3: 0 Extended SPIs implemented Sep 12 22:03:15.764884 kernel: Root IRQ handler: gic_handle_irq Sep 12 22:03:15.764892 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 22:03:15.764898 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 12 22:03:15.764905 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 22:03:15.764911 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 22:03:15.764918 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 12 22:03:15.764925 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 12 22:03:15.764931 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 12 22:03:15.764937 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 12 22:03:15.764944 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 22:03:15.764950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 22:03:15.764957 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 22:03:15.764963 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 22:03:15.764972 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 22:03:15.764978 kernel: arm-pv: using stolen time PV Sep 12 22:03:15.764986 kernel: Console: colour dummy device 80x25 Sep 12 22:03:15.764993 kernel: ACPI: Core revision 20240827 Sep 12 22:03:15.765000 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 22:03:15.765007 kernel: pid_max: default: 32768 minimum: 301 Sep 12 22:03:15.765013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 22:03:15.765020 kernel: landlock: Up and running. Sep 12 22:03:15.765026 kernel: SELinux: Initializing. Sep 12 22:03:15.765035 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 22:03:15.765041 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 22:03:15.765048 kernel: rcu: Hierarchical SRCU implementation. Sep 12 22:03:15.765055 kernel: rcu: Max phase no-delay instances is 400. Sep 12 22:03:15.765062 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 22:03:15.765069 kernel: Remapping and enabling EFI services. Sep 12 22:03:15.765076 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 22:03:15.765082 kernel: Detected PIPT I-cache on CPU1 Sep 12 22:03:15.765089 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 22:03:15.765097 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 12 22:03:15.765109 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 22:03:15.765116 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 22:03:15.765124 kernel: Detected PIPT I-cache on CPU2 Sep 12 22:03:15.765131 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 12 22:03:15.765138 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 12 22:03:15.765145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 22:03:15.765152 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 12 22:03:15.765159 kernel: Detected PIPT I-cache on CPU3 Sep 12 22:03:15.765168 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 12 22:03:15.765175 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 12 22:03:15.765182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 22:03:15.765189 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 12 22:03:15.765196 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 22:03:15.765203 kernel: SMP: Total of 4 processors activated. Sep 12 22:03:15.765210 kernel: CPU: All CPU(s) started at EL1 Sep 12 22:03:15.765216 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 22:03:15.765223 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 22:03:15.765232 kernel: CPU features: detected: Common not Private translations Sep 12 22:03:15.765245 kernel: CPU features: detected: CRC32 instructions Sep 12 22:03:15.765252 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 22:03:15.765259 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 22:03:15.765266 kernel: CPU features: detected: LSE atomic instructions Sep 12 22:03:15.765273 kernel: CPU features: detected: Privileged Access Never Sep 12 22:03:15.765280 kernel: CPU features: detected: RAS Extension Support Sep 12 22:03:15.765287 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 22:03:15.765294 kernel: alternatives: applying system-wide alternatives Sep 12 22:03:15.765303 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 12 22:03:15.765311 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved) Sep 12 22:03:15.765318 kernel: devtmpfs: initialized Sep 12 22:03:15.765325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 22:03:15.765332 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 22:03:15.765339 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 22:03:15.765345 kernel: 0 pages in range for non-PLT usage Sep 12 22:03:15.765352 kernel: 508560 pages in range for PLT usage Sep 12 22:03:15.765359 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 22:03:15.765368 kernel: SMBIOS 3.0.0 present. 
Sep 12 22:03:15.765375 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 12 22:03:15.765381 kernel: DMI: Memory slots populated: 1/1 Sep 12 22:03:15.765388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 22:03:15.765402 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 22:03:15.765409 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 22:03:15.765416 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 22:03:15.765423 kernel: audit: initializing netlink subsys (disabled) Sep 12 22:03:15.765430 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 12 22:03:15.765439 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 22:03:15.765447 kernel: cpuidle: using governor menu Sep 12 22:03:15.765454 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 22:03:15.765461 kernel: ASID allocator initialised with 32768 entries Sep 12 22:03:15.765468 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 22:03:15.765476 kernel: Serial: AMBA PL011 UART driver Sep 12 22:03:15.765483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 22:03:15.765490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 22:03:15.765497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 22:03:15.765506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 22:03:15.765513 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 22:03:15.765520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 22:03:15.765527 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 22:03:15.765535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 22:03:15.765542 kernel: ACPI: Added _OSI(Module Device) Sep 12 22:03:15.765549 kernel: ACPI: Added _OSI(Processor Device) Sep 12 22:03:15.765556 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 22:03:15.765563 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 22:03:15.765572 kernel: ACPI: Interpreter enabled Sep 12 22:03:15.765579 kernel: ACPI: Using GIC for interrupt routing Sep 12 22:03:15.765586 kernel: ACPI: MCFG table detected, 1 entries Sep 12 22:03:15.765593 kernel: ACPI: CPU0 has been hot-added Sep 12 22:03:15.765601 kernel: ACPI: CPU1 has been hot-added Sep 12 22:03:15.765607 kernel: ACPI: CPU2 has been hot-added Sep 12 22:03:15.765614 kernel: ACPI: CPU3 has been hot-added Sep 12 22:03:15.765621 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 22:03:15.765628 kernel: printk: legacy console [ttyAMA0] enabled Sep 12 22:03:15.765637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 22:03:15.765772 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 22:03:15.765945 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 22:03:15.766016 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 22:03:15.766075 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 22:03:15.766133 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 22:03:15.766143 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 22:03:15.766154 
kernel: PCI host bridge to bus 0000:00 Sep 12 22:03:15.766223 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 22:03:15.766293 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 22:03:15.766365 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 22:03:15.766420 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 22:03:15.766508 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 12 22:03:15.766581 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 12 22:03:15.766645 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 12 22:03:15.766706 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 12 22:03:15.767420 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 22:03:15.767498 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 12 22:03:15.767560 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 12 22:03:15.767621 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 12 22:03:15.767685 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 22:03:15.767741 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 22:03:15.767794 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 22:03:15.767803 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 22:03:15.767832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 22:03:15.767842 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 22:03:15.767849 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 22:03:15.767856 kernel: iommu: Default domain type: Translated Sep 12 22:03:15.767863 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 22:03:15.767873 kernel: efivars: Registered efivars operations Sep 12 22:03:15.767880 kernel: vgaarb: loaded Sep 12 22:03:15.767886 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 22:03:15.767894 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 22:03:15.767901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 22:03:15.767908 kernel: pnp: PnP ACPI init Sep 12 22:03:15.767994 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 22:03:15.768006 kernel: pnp: PnP ACPI: found 1 devices Sep 12 22:03:15.768016 kernel: NET: Registered PF_INET protocol family Sep 12 22:03:15.768023 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 22:03:15.768030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 22:03:15.768037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 22:03:15.768044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 22:03:15.768051 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 22:03:15.768059 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 22:03:15.768066 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 22:03:15.768073 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 22:03:15.768081 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 22:03:15.768088 kernel: PCI: CLS 0 bytes, default 64 Sep 12 22:03:15.768095 
kernel: kvm [1]: HYP mode not available Sep 12 22:03:15.768102 kernel: Initialise system trusted keyrings Sep 12 22:03:15.768109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 22:03:15.768116 kernel: Key type asymmetric registered Sep 12 22:03:15.768124 kernel: Asymmetric key parser 'x509' registered Sep 12 22:03:15.768131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 12 22:03:15.768138 kernel: io scheduler mq-deadline registered Sep 12 22:03:15.768147 kernel: io scheduler kyber registered Sep 12 22:03:15.768154 kernel: io scheduler bfq registered Sep 12 22:03:15.768161 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 22:03:15.768168 kernel: ACPI: button: Power Button [PWRB] Sep 12 22:03:15.768175 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 22:03:15.768245 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 12 22:03:15.768257 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 22:03:15.768264 kernel: thunder_xcv, ver 1.0 Sep 12 22:03:15.768271 kernel: thunder_bgx, ver 1.0 Sep 12 22:03:15.768281 kernel: nicpf, ver 1.0 Sep 12 22:03:15.768288 kernel: nicvf, ver 1.0 Sep 12 22:03:15.768367 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 22:03:15.768428 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T22:03:15 UTC (1757714595) Sep 12 22:03:15.768438 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 22:03:15.768445 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 12 22:03:15.768453 kernel: watchdog: NMI not fully supported Sep 12 22:03:15.768460 kernel: watchdog: Hard watchdog permanently disabled Sep 12 22:03:15.768469 kernel: NET: Registered PF_INET6 protocol family Sep 12 22:03:15.768476 kernel: Segment Routing with IPv6 Sep 12 22:03:15.768483 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 22:03:15.768490 kernel: NET: Registered PF_PACKET protocol family Sep 12 22:03:15.768497 kernel: Key type dns_resolver registered Sep 12 22:03:15.768504 kernel: registered taskstats version 1 Sep 12 22:03:15.768511 kernel: Loading compiled-in X.509 certificates Sep 12 22:03:15.768518 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 2d7730e6d35b3fbd1c590cd72a2500b2380c020e' Sep 12 22:03:15.768526 kernel: Demotion targets for Node 0: null Sep 12 22:03:15.768534 kernel: Key type .fscrypt registered Sep 12 22:03:15.768541 kernel: Key type fscrypt-provisioning registered Sep 12 22:03:15.768549 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 22:03:15.768556 kernel: ima: Allocated hash algorithm: sha1 Sep 12 22:03:15.768563 kernel: ima: No architecture policies found Sep 12 22:03:15.768570 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 22:03:15.768577 kernel: clk: Disabling unused clocks Sep 12 22:03:15.768585 kernel: PM: genpd: Disabling unused power domains Sep 12 22:03:15.768591 kernel: Warning: unable to open an initial console. Sep 12 22:03:15.768600 kernel: Freeing unused kernel memory: 38976K Sep 12 22:03:15.768607 kernel: Run /init as init process Sep 12 22:03:15.768614 kernel: with arguments: Sep 12 22:03:15.768621 kernel: /init Sep 12 22:03:15.768628 kernel: with environment: Sep 12 22:03:15.768635 kernel: HOME=/ Sep 12 22:03:15.768642 kernel: TERM=linux Sep 12 22:03:15.768649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 22:03:15.768657 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 22:03:15.768669 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:03:15.768678 systemd[1]: Detected virtualization kvm. Sep 12 22:03:15.768685 systemd[1]: Detected architecture arm64. Sep 12 22:03:15.768692 systemd[1]: Running in initrd. Sep 12 22:03:15.768700 systemd[1]: No hostname configured, using default hostname. Sep 12 22:03:15.768708 systemd[1]: Hostname set to . Sep 12 22:03:15.768715 systemd[1]: Initializing machine ID from VM UUID. Sep 12 22:03:15.768724 systemd[1]: Queued start job for default target initrd.target. Sep 12 22:03:15.768731 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:03:15.768739 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:03:15.768747 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 22:03:15.768755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:03:15.768763 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 22:03:15.768771 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 22:03:15.768781 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 22:03:15.768789 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 22:03:15.768797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:03:15.768805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:03:15.768836 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:03:15.768845 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:03:15.768853 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:03:15.768860 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:03:15.768870 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:03:15.768878 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 22:03:15.768886 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 22:03:15.768893 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 22:03:15.768901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:03:15.768909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:03:15.768917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:03:15.768924 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:03:15.768932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 22:03:15.768940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:03:15.768948 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
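The device units listed above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, are not corrupted text: systemd derives unit names from paths by turning '/' into '-' and writing every other byte outside [A-Za-z0-9:_.] as a C-style \xNN escape, so the '-' in "by-label" and "EFI-SYSTEM" becomes \x2d. The sketch below reimplements that mapping in simplified form for the paths seen here; systemd-escape --path is the authoritative tool and handles corner cases this version ignores.

    def systemd_escape_path(path: str) -> str:
        """Simplified systemd path escaping: '/' becomes '-', other special bytes become C-style escapes."""
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif (ch.isascii() and ch.isalnum() or ch in ":_.") and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("".join(r"\x%02x" % b for b in ch.encode()))
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name in the log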
Sep 12 22:03:15.768956 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 22:03:15.768964 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 22:03:15.768971 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:03:15.768979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:03:15.768987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:03:15.768994 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 22:03:15.769004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:03:15.769012 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 22:03:15.769019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:03:15.769045 systemd-journald[243]: Collecting audit messages is disabled. Sep 12 22:03:15.769066 systemd-journald[243]: Journal started Sep 12 22:03:15.769083 systemd-journald[243]: Runtime Journal (/run/log/journal/67fd53c4f4d54aee857c7709206100c8) is 6M, max 48.5M, 42.4M free. Sep 12 22:03:15.774886 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 22:03:15.774920 kernel: Bridge firewalling registered Sep 12 22:03:15.758268 systemd-modules-load[246]: Inserted module 'overlay' Sep 12 22:03:15.777062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:03:15.775179 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 12 22:03:15.779770 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:03:15.781598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:03:15.785236 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 22:03:15.788079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:03:15.796405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:03:15.797849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:03:15.801661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:03:15.808994 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 22:03:15.810464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:03:15.815107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:03:15.816962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:03:15.821569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:03:15.822944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:03:15.825732 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 22:03:15.840149 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319fa5fb212e5dd8bf766d2f9f0bbb61d6aa6c81f2813f4b5b49defba0af2b2f Sep 12 22:03:15.854104 systemd-resolved[287]: Positive Trust Anchors: Sep 12 22:03:15.854124 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:03:15.854157 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:03:15.858890 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 12 22:03:15.859807 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:03:15.863945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:03:15.911863 kernel: SCSI subsystem initialized Sep 12 22:03:15.915838 kernel: Loading iSCSI transport class v2.0-870. Sep 12 22:03:15.923849 kernel: iscsi: registered transport (tcp) Sep 12 22:03:15.936847 kernel: iscsi: registered transport (qla4xxx) Sep 12 22:03:15.936867 kernel: QLogic iSCSI HBA Driver Sep 12 22:03:15.953928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:03:15.974315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:03:15.977015 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:03:16.019691 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 22:03:16.022022 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 22:03:16.090850 kernel: raid6: neonx8 gen() 15761 MB/s Sep 12 22:03:16.107835 kernel: raid6: neonx4 gen() 15785 MB/s Sep 12 22:03:16.124838 kernel: raid6: neonx2 gen() 13192 MB/s Sep 12 22:03:16.141835 kernel: raid6: neonx1 gen() 10409 MB/s Sep 12 22:03:16.158835 kernel: raid6: int64x8 gen() 6899 MB/s Sep 12 22:03:16.175834 kernel: raid6: int64x4 gen() 7343 MB/s Sep 12 22:03:16.192841 kernel: raid6: int64x2 gen() 6093 MB/s Sep 12 22:03:16.210053 kernel: raid6: int64x1 gen() 5034 MB/s Sep 12 22:03:16.210069 kernel: raid6: using algorithm neonx4 gen() 15785 MB/s Sep 12 22:03:16.227993 kernel: raid6: .... xor() 12359 MB/s, rmw enabled Sep 12 22:03:16.228009 kernel: raid6: using neon recovery algorithm Sep 12 22:03:16.234205 kernel: xor: measuring software checksum speed Sep 12 22:03:16.234224 kernel: 8regs : 21658 MB/sec Sep 12 22:03:16.234239 kernel: 32regs : 21664 MB/sec Sep 12 22:03:16.234835 kernel: arm64_neon : 26553 MB/sec Sep 12 22:03:16.234850 kernel: xor: using function: arm64_neon (26553 MB/sec) Sep 12 22:03:16.287856 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 22:03:16.293651 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 22:03:16.296403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:03:16.323334 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 12 22:03:16.327463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:03:16.329980 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 22:03:16.361331 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 12 22:03:16.384445 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:03:16.386896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:03:16.438697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:03:16.442056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 22:03:16.503890 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 22:03:16.510322 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 22:03:16.512859 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 22:03:16.512900 kernel: GPT:9289727 != 19775487 Sep 12 22:03:16.512911 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 22:03:16.513959 kernel: GPT:9289727 != 19775487 Sep 12 22:03:16.515351 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 22:03:16.515379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:03:16.515412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:03:16.515563 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:03:16.519624 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:03:16.522019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:03:16.548926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 22:03:16.550459 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 22:03:16.554157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:03:16.562265 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 22:03:16.570875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 22:03:16.577069 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 22:03:16.578335 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 22:03:16.581468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:03:16.583840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:03:16.585992 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:03:16.588696 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 22:03:16.590557 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 22:03:16.604211 disk-uuid[592]: Primary Header is updated. Sep 12 22:03:16.604211 disk-uuid[592]: Secondary Entries is updated. Sep 12 22:03:16.604211 disk-uuid[592]: Secondary Header is updated. 
Sep 12 22:03:16.607968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:03:16.608319 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:03:17.615078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:03:17.616046 disk-uuid[595]: The operation has completed successfully. Sep 12 22:03:17.642555 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 22:03:17.643743 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 22:03:17.667969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 22:03:17.682849 sh[612]: Success Sep 12 22:03:17.694868 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 22:03:17.694908 kernel: device-mapper: uevent: version 1.0.3 Sep 12 22:03:17.696038 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 22:03:17.702858 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 22:03:17.726557 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 22:03:17.729474 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 22:03:17.742944 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 22:03:17.749833 kernel: BTRFS: device fsid 254e43f1-b609-42b8-bcc5-437252095415 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624) Sep 12 22:03:17.752060 kernel: BTRFS info (device dm-0): first mount of filesystem 254e43f1-b609-42b8-bcc5-437252095415 Sep 12 22:03:17.752098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 22:03:17.756363 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 22:03:17.756383 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 22:03:17.757517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 22:03:17.758877 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:03:17.760332 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 22:03:17.761109 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 22:03:17.762673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 22:03:17.784844 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 12 22:03:17.787946 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0 Sep 12 22:03:17.787979 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 22:03:17.791287 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:03:17.791349 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:03:17.795838 kernel: BTRFS info (device vda6): last unmount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0 Sep 12 22:03:17.796322 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 22:03:17.799092 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 22:03:17.871481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:03:17.876582 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 22:03:17.900941 ignition[700]: Ignition 2.22.0 Sep 12 22:03:17.900957 ignition[700]: Stage: fetch-offline Sep 12 22:03:17.900988 ignition[700]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:17.900995 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:17.901069 ignition[700]: parsed url from cmdline: "" Sep 12 22:03:17.901072 ignition[700]: no config URL provided Sep 12 22:03:17.901076 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:03:17.901082 ignition[700]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:03:17.901101 ignition[700]: op(1): [started] loading QEMU firmware config module Sep 12 22:03:17.901105 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 22:03:17.911227 ignition[700]: op(1): [finished] loading QEMU firmware config module Sep 12 22:03:17.919361 systemd-networkd[805]: lo: Link UP Sep 12 22:03:17.919374 systemd-networkd[805]: lo: Gained carrier Sep 12 22:03:17.920038 systemd-networkd[805]: Enumeration completed Sep 12 22:03:17.920118 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:03:17.920415 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:03:17.920419 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:03:17.921149 systemd-networkd[805]: eth0: Link UP Sep 12 22:03:17.921248 systemd-networkd[805]: eth0: Gained carrier Sep 12 22:03:17.921257 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:03:17.921590 systemd[1]: Reached target network.target - Network. Sep 12 22:03:17.934851 systemd-networkd[805]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 22:03:17.962181 ignition[700]: parsing config with SHA512: b3e0d1f18885833d57d64a95f14ba721ad857742bad968a673e9f3a8eef494320d5ff62efcaf688ab1b797f41b8bb389eb1f7301911e918e2670bda1bb883c25 Sep 12 22:03:17.966070 unknown[700]: fetched base config from "system" Sep 12 22:03:17.966081 unknown[700]: fetched user config from "qemu" Sep 12 22:03:17.966471 ignition[700]: fetch-offline: fetch-offline passed Sep 12 22:03:17.966531 ignition[700]: Ignition finished successfully Sep 12 22:03:17.968715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:03:17.970651 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 22:03:17.971391 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 22:03:18.009063 ignition[813]: Ignition 2.22.0 Sep 12 22:03:18.009079 ignition[813]: Stage: kargs Sep 12 22:03:18.009211 ignition[813]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:18.009219 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:18.009985 ignition[813]: kargs: kargs passed Sep 12 22:03:18.012691 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 22:03:18.010026 ignition[813]: Ignition finished successfully Sep 12 22:03:18.017380 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
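In the fetch-offline stage above, Ignition finds no config on disk or on the command line, loads the qemu_fw_cfg module (op(1)), fetches the user config from QEMU, and logs a SHA512 of the config it parsed. As a hedged illustration, a comparable digest can usually be computed from the blob exposed in sysfs once qemu_fw_cfg is loaded; the fw_cfg key name below and the assumption that Ignition hashes exactly these bytes are mine, not taken from this log.

    import hashlib
    import json
    import pathlib

    # Conventional fw_cfg key used for Ignition configs on QEMU (assumed here).
    FW_CFG = pathlib.Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    raw = FW_CFG.read_bytes()
    print("sha512:", hashlib.sha512(raw).hexdigest())   # compare with the digest Ignition logs
    cfg = json.loads(raw)
    print("spec version:", cfg.get("ignition", {}).get("version"))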
Sep 12 22:03:18.045621 ignition[821]: Ignition 2.22.0 Sep 12 22:03:18.045637 ignition[821]: Stage: disks Sep 12 22:03:18.045766 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:18.045775 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:18.046528 ignition[821]: disks: disks passed Sep 12 22:03:18.046573 ignition[821]: Ignition finished successfully Sep 12 22:03:18.049446 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 22:03:18.050659 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 22:03:18.052141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 22:03:18.054018 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:03:18.055796 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:03:18.057773 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:03:18.060338 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 22:03:18.084077 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 22:03:18.087784 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 22:03:18.092136 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 22:03:18.149832 kernel: EXT4-fs (vda9): mounted filesystem a7b592ec-3c41-4dc2-88a7-056c1f18b418 r/w with ordered data mode. Quota mode: none. Sep 12 22:03:18.150719 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 22:03:18.152098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 22:03:18.154655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:03:18.156350 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 22:03:18.157380 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 22:03:18.157418 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 22:03:18.157454 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:03:18.167470 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 22:03:18.169554 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 22:03:18.173838 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 12 22:03:18.176639 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0 Sep 12 22:03:18.176673 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 22:03:18.179375 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:03:18.179409 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:03:18.181364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 22:03:18.206326 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 22:03:18.209825 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 12 22:03:18.213732 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 22:03:18.217390 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 22:03:18.284282 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 12 22:03:18.286415 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 22:03:18.288119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 22:03:18.306846 kernel: BTRFS info (device vda6): last unmount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0 Sep 12 22:03:18.325953 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 22:03:18.341023 ignition[953]: INFO : Ignition 2.22.0 Sep 12 22:03:18.341023 ignition[953]: INFO : Stage: mount Sep 12 22:03:18.343849 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:18.343849 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:18.343849 ignition[953]: INFO : mount: mount passed Sep 12 22:03:18.343849 ignition[953]: INFO : Ignition finished successfully Sep 12 22:03:18.344569 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 22:03:18.346711 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 22:03:18.758268 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 22:03:18.759794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:03:18.782842 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Sep 12 22:03:18.782880 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0 Sep 12 22:03:18.782891 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 22:03:18.786426 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:03:18.786448 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:03:18.787795 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 22:03:18.823031 ignition[983]: INFO : Ignition 2.22.0 Sep 12 22:03:18.823031 ignition[983]: INFO : Stage: files Sep 12 22:03:18.825263 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:18.825263 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:18.825263 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Sep 12 22:03:18.825263 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 22:03:18.825263 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 22:03:18.834618 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 22:03:18.834618 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 22:03:18.834618 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 22:03:18.834618 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 22:03:18.834618 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 22:03:18.828547 unknown[983]: wrote ssh authorized keys file for user: core Sep 12 22:03:18.905558 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 22:03:19.226265 systemd-networkd[805]: eth0: Gained IPv6LL Sep 12 22:03:19.547143 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 22:03:19.549633 ignition[983]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 22:03:19.549633 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 22:03:19.747683 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 22:03:19.848913 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 22:03:19.848913 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:03:19.853058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 22:03:19.870909 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 22:03:19.870909 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 22:03:19.870909 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 22:03:20.245690 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 22:03:20.774736 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 22:03:20.774736 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 22:03:20.778938 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 22:03:20.794647 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 22:03:20.797112 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 22:03:20.800008 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 22:03:20.800008 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 22:03:20.800008 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 22:03:20.800008 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:03:20.800008 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:03:20.800008 ignition[983]: INFO : files: files passed Sep 12 22:03:20.800008 ignition[983]: INFO : Ignition finished successfully Sep 12 22:03:20.802738 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 22:03:20.806009 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 22:03:20.808199 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 22:03:20.822505 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 22:03:20.822608 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 22:03:20.828039 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 22:03:20.829449 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:03:20.829449 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:03:20.833758 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:03:20.829617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:03:20.832458 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 22:03:20.835793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 22:03:20.874426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 22:03:20.874536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 12 22:03:20.877528 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 22:03:20.879194 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 22:03:20.881111 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 22:03:20.881901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 22:03:20.907397 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:03:20.909955 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 22:03:20.931188 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:03:20.932643 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:03:20.934876 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 22:03:20.936772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 22:03:20.936922 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:03:20.940214 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 22:03:20.944018 systemd[1]: Stopped target basic.target - Basic System. Sep 12 22:03:20.945037 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 22:03:20.946892 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:03:20.949092 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 22:03:20.951158 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:03:20.953565 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 22:03:20.955312 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:03:20.957336 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 22:03:20.959390 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 22:03:20.961264 systemd[1]: Stopped target swap.target - Swaps. Sep 12 22:03:20.962831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 22:03:20.962975 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:03:20.965489 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:03:20.967517 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:03:20.969511 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 22:03:20.972901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:03:20.974260 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 22:03:20.974387 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 22:03:20.977305 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 22:03:20.977423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:03:20.979549 systemd[1]: Stopped target paths.target - Path Units. Sep 12 22:03:20.982383 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 22:03:20.985903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:03:20.988050 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 22:03:20.990125 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 22:03:20.992064 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 22:03:20.992152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:03:20.993866 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 22:03:20.994038 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 22:03:20.995766 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 22:03:20.995898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:03:20.997892 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 22:03:20.998014 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 22:03:21.000734 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 22:03:21.002908 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 22:03:21.004126 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 22:03:21.004264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:03:21.006082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 22:03:21.006179 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:03:21.011689 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 22:03:21.018999 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 22:03:21.027748 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 22:03:21.036577 ignition[1039]: INFO : Ignition 2.22.0 Sep 12 22:03:21.036577 ignition[1039]: INFO : Stage: umount Sep 12 22:03:21.038445 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:03:21.038445 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:03:21.038445 ignition[1039]: INFO : umount: umount passed Sep 12 22:03:21.038445 ignition[1039]: INFO : Ignition finished successfully Sep 12 22:03:21.039302 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 22:03:21.040924 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 22:03:21.043892 systemd[1]: Stopped target network.target - Network. Sep 12 22:03:21.045472 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 22:03:21.045535 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 22:03:21.047285 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 22:03:21.047329 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 22:03:21.048957 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 22:03:21.049006 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 22:03:21.050635 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 22:03:21.050679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 22:03:21.052646 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 22:03:21.054382 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 22:03:21.060804 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 22:03:21.060941 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 22:03:21.064250 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 22:03:21.064443 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 12 22:03:21.064548 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 22:03:21.067947 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 22:03:21.068524 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 22:03:21.070609 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 22:03:21.070649 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:03:21.073633 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 22:03:21.074940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 22:03:21.075006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:03:21.077239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:03:21.077288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:03:21.080551 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 22:03:21.080597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 22:03:21.082786 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 22:03:21.082853 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:03:21.086515 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:03:21.091168 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 22:03:21.091243 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:03:21.102192 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 22:03:21.104944 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 22:03:21.106126 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 22:03:21.106250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:03:21.108271 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 22:03:21.108350 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 22:03:21.110675 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 22:03:21.110734 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 22:03:21.112127 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 22:03:21.112158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:03:21.114062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 22:03:21.114113 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:03:21.116753 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 22:03:21.116803 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 22:03:21.121183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 22:03:21.121255 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:03:21.124510 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 22:03:21.124568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 22:03:21.127525 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 22:03:21.128648 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 12 22:03:21.128707 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:03:21.131959 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 22:03:21.132004 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:03:21.135307 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 22:03:21.135355 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:03:21.138862 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 22:03:21.138905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:03:21.140207 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:03:21.140308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:03:21.145261 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 22:03:21.145313 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 22:03:21.145342 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 22:03:21.145373 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:03:21.145731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 22:03:21.145837 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 22:03:21.148215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 22:03:21.150869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 22:03:21.168931 systemd[1]: Switching root. Sep 12 22:03:21.202224 systemd-journald[243]: Journal stopped Sep 12 22:03:21.989978 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 12 22:03:21.990021 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 22:03:21.990035 kernel: SELinux: policy capability open_perms=1 Sep 12 22:03:21.990045 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 22:03:21.990057 kernel: SELinux: policy capability always_check_network=0 Sep 12 22:03:21.990068 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 22:03:21.990079 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 22:03:21.990090 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 22:03:21.990100 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 22:03:21.990109 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 22:03:21.990119 kernel: audit: type=1403 audit(1757714601.401:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 22:03:21.990132 systemd[1]: Successfully loaded SELinux policy in 53.614ms. Sep 12 22:03:21.990154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.442ms. Sep 12 22:03:21.990166 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:03:21.990177 systemd[1]: Detected virtualization kvm. 
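The systemd banner above lists its compile-time features as +/- flags (e.g. +SELINUX, -APPARMOR). On a running host the same line can be obtained from `systemctl --version`; the small helper below, whose name is purely illustrative, splits those flags into enabled and disabled sets.

    #!/usr/bin/env python3
    """Sketch: parse the +FEATURE/-FEATURE flags printed by `systemctl
    --version` (same format as the systemd banner logged above)."""
    import subprocess

    def systemd_features():
        out = subprocess.run(["systemctl", "--version"],
                             capture_output=True, text=True, check=True).stdout
        enabled, disabled = set(), set()
        for token in out.split():
            token = token.strip("()")
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-") and token[1:].isupper():
                disabled.add(token[1:])
        return enabled, disabled

    if __name__ == "__main__":
        on, off = systemd_features()
        print("enabled: ", " ".join(sorted(on)))
        print("disabled:", " ".join(sorted(off)))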
Sep 12 22:03:21.990188 systemd[1]: Detected architecture arm64. Sep 12 22:03:21.990198 systemd[1]: Detected first boot. Sep 12 22:03:21.990208 systemd[1]: Initializing machine ID from VM UUID. Sep 12 22:03:21.990228 zram_generator::config[1084]: No configuration found. Sep 12 22:03:21.990245 kernel: NET: Registered PF_VSOCK protocol family Sep 12 22:03:21.990264 systemd[1]: Populated /etc with preset unit settings. Sep 12 22:03:21.990282 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 22:03:21.990300 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 22:03:21.990314 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 22:03:21.990330 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 22:03:21.990344 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 22:03:21.990358 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 22:03:21.990371 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 22:03:21.990385 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 22:03:21.990399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 22:03:21.990410 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 22:03:21.990421 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 22:03:21.990433 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 22:03:21.990449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:03:21.990459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:03:21.990469 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 22:03:21.990480 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 22:03:21.990490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 22:03:21.990501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:03:21.990511 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 22:03:21.990521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:03:21.990536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:03:21.990546 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 22:03:21.990557 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 22:03:21.990568 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 22:03:21.990578 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 22:03:21.990588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:03:21.990599 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:03:21.990609 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:03:21.990622 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:03:21.990632 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 12 22:03:21.990642 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 22:03:21.990652 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 22:03:21.990666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:03:21.990676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:03:21.990687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:03:21.990698 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 22:03:21.990709 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 22:03:21.990721 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 22:03:21.990731 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 22:03:21.990741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 22:03:21.990752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 22:03:21.990763 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 22:03:21.990774 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 22:03:21.990785 systemd[1]: Reached target machines.target - Containers. Sep 12 22:03:21.990795 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 22:03:21.990806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:03:21.990834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:03:21.990846 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 22:03:21.990855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:03:21.990866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:03:21.990876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:03:21.990891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 22:03:21.990905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:03:21.990918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 22:03:21.990934 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 22:03:21.990948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 22:03:21.990960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 22:03:21.990973 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 22:03:21.990989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:03:21.991000 kernel: fuse: init (API version 7.41) Sep 12 22:03:21.991010 kernel: ACPI: bus type drm_connector registered Sep 12 22:03:21.991023 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:03:21.991036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 12 22:03:21.991051 kernel: loop: module loaded Sep 12 22:03:21.991064 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:03:21.991077 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 22:03:21.991119 systemd-journald[1159]: Collecting audit messages is disabled. Sep 12 22:03:21.991143 systemd-journald[1159]: Journal started Sep 12 22:03:21.991164 systemd-journald[1159]: Runtime Journal (/run/log/journal/67fd53c4f4d54aee857c7709206100c8) is 6M, max 48.5M, 42.4M free. Sep 12 22:03:21.767200 systemd[1]: Queued start job for default target multi-user.target. Sep 12 22:03:21.792849 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 22:03:21.793242 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 22:03:21.996906 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 22:03:22.000758 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:03:22.000801 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 22:03:22.002433 systemd[1]: Stopped verity-setup.service. Sep 12 22:03:22.006851 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:03:22.007521 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 22:03:22.008797 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 22:03:22.010025 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 22:03:22.011123 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 22:03:22.012357 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 22:03:22.013656 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 22:03:22.015039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 22:03:22.017895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:03:22.019451 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 22:03:22.020857 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 22:03:22.022338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:03:22.022495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:03:22.023935 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 22:03:22.024104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:03:22.025435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:03:22.025582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:03:22.027107 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 22:03:22.027288 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 22:03:22.028610 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:03:22.028765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:03:22.030228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:03:22.031767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:03:22.033396 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
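The modprobe@*.service units above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel confirms fuse, drm_connector and loop initialising. A simple way to check the same set afterwards is to read /proc/modules, as in the sketch below; note that modules built into the kernel do not appear there, so "not listed" is not necessarily an error.

    #!/usr/bin/env python3
    """Sketch: check whether the modules loaded via modprobe@.service in the
    log above are visible in /proc/modules on the running system."""

    WANTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def loaded_modules(path="/proc/modules"):
        with open(path) as fh:
            return {line.split()[0] for line in fh if line.strip()}

    if __name__ == "__main__":
        present = loaded_modules()
        for name in WANTED:
            state = "loaded" if name in present else "not listed (may be built in)"
            print(f"{name:10} {state}")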
Sep 12 22:03:22.035074 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 22:03:22.047048 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:03:22.049357 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 22:03:22.051317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 22:03:22.052568 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 22:03:22.052600 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:03:22.054516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 22:03:22.058945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 22:03:22.060038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:03:22.061119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 22:03:22.063057 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 22:03:22.064451 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:03:22.065566 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 22:03:22.066923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:03:22.067862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:03:22.071249 systemd-journald[1159]: Time spent on flushing to /var/log/journal/67fd53c4f4d54aee857c7709206100c8 is 20.892ms for 890 entries. Sep 12 22:03:22.071249 systemd-journald[1159]: System Journal (/var/log/journal/67fd53c4f4d54aee857c7709206100c8) is 8M, max 195.6M, 187.6M free. Sep 12 22:03:22.100075 systemd-journald[1159]: Received client request to flush runtime journal. Sep 12 22:03:22.100110 kernel: loop0: detected capacity change from 0 to 119368 Sep 12 22:03:22.100123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 22:03:22.071391 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 22:03:22.075739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:03:22.078861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:03:22.082113 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 22:03:22.085043 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 22:03:22.087163 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 22:03:22.094620 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 22:03:22.097962 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 22:03:22.106856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 22:03:22.107260 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Sep 12 22:03:22.107493 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. 
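systemd-journald reports spending 20.892 ms flushing 890 entries to the persistent journal, alongside the runtime journal quota (6M used, max 48.5M, 42.4M free). The per-entry cost works out to roughly 23 µs; the lines below are plain arithmetic on the figures quoted in the log.

    # Figures taken from the systemd-journald messages above.
    flush_ms, entries = 20.892, 890
    per_entry_us = flush_ms * 1000 / entries
    print(f"~{per_entry_us:.1f} us per flushed journal entry")   # ~23.5 us

    runtime_max_mib, runtime_free_mib = 48.5, 42.4
    used = runtime_max_mib - runtime_free_mib                    # ~6.1 MiB, matching "is 6M"
    print(f"runtime journal: {used:.1f} MiB used of {runtime_max_mib} MiB")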
Sep 12 22:03:22.109310 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:03:22.112147 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:03:22.116942 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 22:03:22.120839 kernel: loop1: detected capacity change from 0 to 100632 Sep 12 22:03:22.138397 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 22:03:22.143725 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 22:03:22.146844 kernel: loop2: detected capacity change from 0 to 207008 Sep 12 22:03:22.149040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:03:22.175849 kernel: loop3: detected capacity change from 0 to 119368 Sep 12 22:03:22.176347 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 22:03:22.176366 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 22:03:22.179751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:03:22.183832 kernel: loop4: detected capacity change from 0 to 100632 Sep 12 22:03:22.189885 kernel: loop5: detected capacity change from 0 to 207008 Sep 12 22:03:22.193978 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 22:03:22.194363 (sd-merge)[1225]: Merged extensions into '/usr'. Sep 12 22:03:22.197777 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 22:03:22.199157 systemd[1]: Reloading... Sep 12 22:03:22.268962 zram_generator::config[1252]: No configuration found. Sep 12 22:03:22.344016 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 22:03:22.407004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 22:03:22.407576 systemd[1]: Reloading finished in 207 ms. Sep 12 22:03:22.429853 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 22:03:22.431334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 22:03:22.443195 systemd[1]: Starting ensure-sysext.service... Sep 12 22:03:22.445108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:03:22.453973 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Sep 12 22:03:22.453988 systemd[1]: Reloading... Sep 12 22:03:22.459779 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 22:03:22.459823 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 22:03:22.460062 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 22:03:22.460263 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 22:03:22.460894 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 22:03:22.461090 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 12 22:03:22.461140 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. 
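sd-merge above merges the containerd-flatcar, docker-flatcar and kubernetes system extensions into /usr, after which systemd reloads its unit set. On a booted system the current merge status can be inspected with `systemd-sysext list`; the wrapper below is only a convenience sketch around that stock command.

    #!/usr/bin/env python3
    """Sketch: show which system extensions are currently merged, mirroring
    the sd-merge message above ('containerd-flatcar', 'docker-flatcar',
    'kubernetes')."""
    import subprocess

    def list_sysexts():
        result = subprocess.run(["systemd-sysext", "list"],
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(list_sysexts())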
Sep 12 22:03:22.464052 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:03:22.464066 systemd-tmpfiles[1287]: Skipping /boot Sep 12 22:03:22.469754 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:03:22.469771 systemd-tmpfiles[1287]: Skipping /boot Sep 12 22:03:22.504040 zram_generator::config[1314]: No configuration found. Sep 12 22:03:22.628670 systemd[1]: Reloading finished in 174 ms. Sep 12 22:03:22.637306 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 22:03:22.643178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:03:22.653889 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:03:22.656459 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 22:03:22.666670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 22:03:22.671963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:03:22.674438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:03:22.677119 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 22:03:22.684187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:03:22.689328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:03:22.692149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:03:22.695991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:03:22.697377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:03:22.697505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:03:22.699864 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 22:03:22.701954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:03:22.702104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:03:22.711891 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 22:03:22.714101 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:03:22.714274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:03:22.716977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:03:22.717166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:03:22.718781 systemd-udevd[1361]: Using default interface naming scheme 'v255'. Sep 12 22:03:22.719361 augenrules[1382]: No rules Sep 12 22:03:22.720359 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:03:22.721866 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:03:22.724708 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 22:03:22.731205 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 12 22:03:22.732387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:03:22.733432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:03:22.735659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:03:22.740658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:03:22.743555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:03:22.745048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:03:22.745168 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:03:22.747458 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 22:03:22.760775 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 22:03:22.762927 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 22:03:22.764446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:03:22.767593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:03:22.767825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:03:22.770153 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 22:03:22.772943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:03:22.774856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:03:22.775018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:03:22.778459 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:03:22.778649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:03:22.784765 systemd[1]: Finished ensure-sysext.service. Sep 12 22:03:22.794005 augenrules[1391]: /sbin/augenrules: No change Sep 12 22:03:22.798389 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 22:03:22.811852 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:03:22.812926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:03:22.813003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:03:22.815677 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 22:03:22.815945 augenrules[1450]: No rules Sep 12 22:03:22.817165 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:03:22.818912 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:03:22.822805 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 22:03:22.868965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 12 22:03:22.873045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 22:03:22.875340 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 22:03:22.902853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 22:03:22.948611 systemd-networkd[1451]: lo: Link UP Sep 12 22:03:22.948915 systemd-networkd[1451]: lo: Gained carrier Sep 12 22:03:22.949794 systemd-networkd[1451]: Enumeration completed Sep 12 22:03:22.950329 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:03:22.950388 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:03:22.950914 systemd-networkd[1451]: eth0: Link UP Sep 12 22:03:22.951089 systemd-networkd[1451]: eth0: Gained carrier Sep 12 22:03:22.951148 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:03:22.951340 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:03:22.954201 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 22:03:22.957506 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 22:03:22.957909 systemd-networkd[1451]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 22:03:22.968967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:03:22.978694 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 22:03:22.981107 systemd-timesyncd[1457]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 22:03:22.981155 systemd-timesyncd[1457]: Initial clock synchronization to Fri 2025-09-12 22:03:23.304220 UTC. Sep 12 22:03:22.981679 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 22:03:22.986277 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 22:03:23.025138 systemd-resolved[1354]: Positive Trust Anchors: Sep 12 22:03:23.025171 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:03:23.025204 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:03:23.027912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:03:23.031658 systemd-resolved[1354]: Defaulting to hostname 'linux'. Sep 12 22:03:23.033249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:03:23.034640 systemd[1]: Reached target network.target - Network. Sep 12 22:03:23.035714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
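systemd-networkd brings up eth0 with the DHCPv4 lease 10.0.0.34/16 and gateway 10.0.0.1, which also serves DHCP and NTP here. A /16 prefix means the gateway shares the first two octets with the lease and is reachable on-link; the snippet below simply restates that with Python's ipaddress module, using the values from the log.

    import ipaddress

    # Values from the systemd-networkd messages above.
    lease = ipaddress.ip_interface("10.0.0.34/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(lease.network)                      # 10.0.0.0/16
    print(gateway in lease.network)           # True: gateway is on-link
    print(lease.network.num_addresses - 2)    # 65534 usable host addresses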
Sep 12 22:03:23.037047 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:03:23.038255 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 22:03:23.039576 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 22:03:23.041139 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 22:03:23.042373 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 22:03:23.043800 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 22:03:23.045095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 22:03:23.045132 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:03:23.046084 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:03:23.047997 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 22:03:23.050673 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 22:03:23.053707 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 22:03:23.055310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 22:03:23.056659 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 22:03:23.069863 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 22:03:23.071296 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 22:03:23.073149 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 22:03:23.074361 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:03:23.075454 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:03:23.076510 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:03:23.076541 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:03:23.077588 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 22:03:23.079693 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 22:03:23.081738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 22:03:23.083914 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 22:03:23.085839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 22:03:23.086905 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 22:03:23.087869 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 22:03:23.091972 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 22:03:23.092210 jq[1503]: false Sep 12 22:03:23.095418 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 22:03:23.098129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 22:03:23.101474 systemd[1]: Starting systemd-logind.service - User Login Management... 
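Once sockets.target and basic.target are reached above, ordinary services such as containerd, dbus, motdgen and sshd-keygen can start. To see the same fan-out on a running host, `systemctl list-dependencies` can be queried; the default unit name in the sketch below is just the basic.target named in the log.

    #!/usr/bin/env python3
    """Sketch: print what a target pulls in, e.g. basic.target as reached in
    the log above. Requires systemd's systemctl in PATH."""
    import subprocess
    import sys

    def deps(unit="basic.target"):
        out = subprocess.run(["systemctl", "list-dependencies", "--plain", unit],
                             capture_output=True, text=True, check=True).stdout
        return [line.strip() for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        unit = sys.argv[1] if len(sys.argv) > 1 else "basic.target"
        for dep in deps(unit):
            print(dep)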
Sep 12 22:03:23.103063 extend-filesystems[1504]: Found /dev/vda6 Sep 12 22:03:23.103635 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 22:03:23.104076 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 22:03:23.106436 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 22:03:23.108123 extend-filesystems[1504]: Found /dev/vda9 Sep 12 22:03:23.110542 extend-filesystems[1504]: Checking size of /dev/vda9 Sep 12 22:03:23.112759 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 22:03:23.117647 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 22:03:23.121329 jq[1522]: true Sep 12 22:03:23.121347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 22:03:23.121603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 22:03:23.121948 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 22:03:23.122129 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 22:03:23.124095 extend-filesystems[1504]: Resized partition /dev/vda9 Sep 12 22:03:23.124624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 22:03:23.124798 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 22:03:23.134705 extend-filesystems[1532]: resize2fs 1.47.3 (8-Jul-2025) Sep 12 22:03:23.141855 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 22:03:23.146361 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 22:03:23.149159 jq[1531]: true Sep 12 22:03:23.151309 update_engine[1519]: I20250912 22:03:23.150987 1519 main.cc:92] Flatcar Update Engine starting Sep 12 22:03:23.169811 tar[1529]: linux-arm64/LICENSE Sep 12 22:03:23.170073 tar[1529]: linux-arm64/helm Sep 12 22:03:23.183953 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 22:03:23.190787 dbus-daemon[1501]: [system] SELinux support is enabled Sep 12 22:03:23.191236 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 22:03:23.196267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 22:03:23.196295 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 22:03:23.199046 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 22:03:23.199069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 22:03:23.201711 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 22:03:23.201711 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 22:03:23.201711 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 12 22:03:23.210490 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Sep 12 22:03:23.214928 update_engine[1519]: I20250912 22:03:23.203701 1519 update_check_scheduler.cc:74] Next update check in 4m37s Sep 12 22:03:23.203243 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 22:03:23.203491 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 22:03:23.209910 systemd[1]: Started update-engine.service - Update Engine. Sep 12 22:03:23.214019 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 22:03:23.226393 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Sep 12 22:03:23.229988 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 22:03:23.233542 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 22:03:23.235427 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 22:03:23.237123 systemd-logind[1517]: New seat seat0. Sep 12 22:03:23.241047 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 22:03:23.267522 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 22:03:23.323435 containerd[1533]: time="2025-09-12T22:03:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 22:03:23.324080 containerd[1533]: time="2025-09-12T22:03:23.324043069Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 22:03:23.339299 containerd[1533]: time="2025-09-12T22:03:23.339190472Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.237µs" Sep 12 22:03:23.339299 containerd[1533]: time="2025-09-12T22:03:23.339297633Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 22:03:23.339386 containerd[1533]: time="2025-09-12T22:03:23.339328720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 22:03:23.339537 containerd[1533]: time="2025-09-12T22:03:23.339511996Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 22:03:23.339566 containerd[1533]: time="2025-09-12T22:03:23.339538090Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 22:03:23.339587 containerd[1533]: time="2025-09-12T22:03:23.339569052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 22:03:23.339650 containerd[1533]: time="2025-09-12T22:03:23.339626731Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 22:03:23.339650 containerd[1533]: time="2025-09-12T22:03:23.339647331Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 22:03:23.339951 containerd[1533]: time="2025-09-12T22:03:23.339928863Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
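extend-filesystems grows the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 4 KiB blocks, and update_engine schedules its next check in 4m37s. The arithmetic below converts those figures, taken verbatim from the log, into more familiar units.

    # Figures from the EXT4 resize and update_engine messages above.
    BLOCK = 4096                      # "(4k) blocks" per the kernel message
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB, after: {to_gib(new_blocks):.2f} GiB")
    # before: 2.11 GiB, after: 7.11 GiB

    next_check_s = 4 * 60 + 37        # "Next update check in 4m37s"
    print(f"next update-engine check in {next_check_s} s")   # 277 s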
type=io.containerd.snapshotter.v1 Sep 12 22:03:23.339976 containerd[1533]: time="2025-09-12T22:03:23.339950795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 22:03:23.339976 containerd[1533]: time="2025-09-12T22:03:23.339967233Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 22:03:23.340010 containerd[1533]: time="2025-09-12T22:03:23.339979177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.340063199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.340282349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.340316182Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.340329583Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.340367370Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.341015746Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 22:03:23.341370 containerd[1533]: time="2025-09-12T22:03:23.341148709Z" level=info msg="metadata content store policy set" policy=shared Sep 12 22:03:23.345605 containerd[1533]: time="2025-09-12T22:03:23.345526453Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 22:03:23.345768 containerd[1533]: time="2025-09-12T22:03:23.345712851Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 22:03:23.345768 containerd[1533]: time="2025-09-12T22:03:23.345739360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 22:03:23.346093 containerd[1533]: time="2025-09-12T22:03:23.345752761Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 22:03:23.346093 containerd[1533]: time="2025-09-12T22:03:23.346059095Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 22:03:23.346093 containerd[1533]: time="2025-09-12T22:03:23.346078364Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 22:03:23.346093 containerd[1533]: time="2025-09-12T22:03:23.346101502Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 22:03:23.346225 containerd[1533]: time="2025-09-12T22:03:23.346114819Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 22:03:23.346225 containerd[1533]: time="2025-09-12T22:03:23.346126846Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 22:03:23.346225 containerd[1533]: time="2025-09-12T22:03:23.346182279Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 22:03:23.346225 containerd[1533]: time="2025-09-12T22:03:23.346198217Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 22:03:23.346225 containerd[1533]: time="2025-09-12T22:03:23.346212117Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 22:03:23.346418 containerd[1533]: time="2025-09-12T22:03:23.346335675Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 22:03:23.346457 containerd[1533]: time="2025-09-12T22:03:23.346367761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 22:03:23.346484 containerd[1533]: time="2025-09-12T22:03:23.346455529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 22:03:23.346484 containerd[1533]: time="2025-09-12T22:03:23.346474339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 22:03:23.346563 containerd[1533]: time="2025-09-12T22:03:23.346485783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 22:03:23.346563 containerd[1533]: time="2025-09-12T22:03:23.346497020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 22:03:23.346563 containerd[1533]: time="2025-09-12T22:03:23.346508963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 22:03:23.346563 containerd[1533]: time="2025-09-12T22:03:23.346522697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 22:03:23.346563 containerd[1533]: time="2025-09-12T22:03:23.346534141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 22:03:23.346668 containerd[1533]: time="2025-09-12T22:03:23.346587576Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 22:03:23.346668 containerd[1533]: time="2025-09-12T22:03:23.346605720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 22:03:23.346952 containerd[1533]: time="2025-09-12T22:03:23.346877348Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 22:03:23.347007 containerd[1533]: time="2025-09-12T22:03:23.346957874Z" level=info msg="Start snapshots syncer" Sep 12 22:03:23.347007 containerd[1533]: time="2025-09-12T22:03:23.346989877Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 22:03:23.347383 containerd[1533]: time="2025-09-12T22:03:23.347324760Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 22:03:23.347495 containerd[1533]: time="2025-09-12T22:03:23.347391387Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 22:03:23.347644 containerd[1533]: time="2025-09-12T22:03:23.347526681Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 22:03:23.347850 containerd[1533]: time="2025-09-12T22:03:23.347765514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 22:03:23.347891 containerd[1533]: time="2025-09-12T22:03:23.347874340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 22:03:23.347913 containerd[1533]: time="2025-09-12T22:03:23.347894108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 22:03:23.347913 containerd[1533]: time="2025-09-12T22:03:23.347908548Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 22:03:23.347948 containerd[1533]: time="2025-09-12T22:03:23.347922531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 22:03:23.347967 containerd[1533]: time="2025-09-12T22:03:23.347951704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 22:03:23.347967 containerd[1533]: time="2025-09-12T22:03:23.347964480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 22:03:23.348074 containerd[1533]: time="2025-09-12T22:03:23.348054787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 22:03:23.348099 containerd[1533]: 
time="2025-09-12T22:03:23.348080047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 22:03:23.348099 containerd[1533]: time="2025-09-12T22:03:23.348093240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 22:03:23.348150 containerd[1533]: time="2025-09-12T22:03:23.348137186Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:03:23.348171 containerd[1533]: time="2025-09-12T22:03:23.348156912Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:03:23.348171 containerd[1533]: time="2025-09-12T22:03:23.348167399Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:03:23.348248 containerd[1533]: time="2025-09-12T22:03:23.348227909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:03:23.348275 containerd[1533]: time="2025-09-12T22:03:23.348246261Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 22:03:23.348275 containerd[1533]: time="2025-09-12T22:03:23.348258497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 22:03:23.348275 containerd[1533]: time="2025-09-12T22:03:23.348271023Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 22:03:23.348363 containerd[1533]: time="2025-09-12T22:03:23.348352757Z" level=info msg="runtime interface created" Sep 12 22:03:23.348363 containerd[1533]: time="2025-09-12T22:03:23.348360830Z" level=info msg="created NRI interface" Sep 12 22:03:23.348399 containerd[1533]: time="2025-09-12T22:03:23.348369986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 22:03:23.348399 containerd[1533]: time="2025-09-12T22:03:23.348383386Z" level=info msg="Connect containerd service" Sep 12 22:03:23.348436 containerd[1533]: time="2025-09-12T22:03:23.348412683Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 22:03:23.349478 containerd[1533]: time="2025-09-12T22:03:23.349446464Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:03:23.411044 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 22:03:23.420686 containerd[1533]: time="2025-09-12T22:03:23.420532066Z" level=info msg="Start subscribing containerd event" Sep 12 22:03:23.420686 containerd[1533]: time="2025-09-12T22:03:23.420624370Z" level=info msg="Start recovering state" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420715509Z" level=info msg="Start event monitor" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420728077Z" level=info msg="Start cni network conf syncer for default" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420737232Z" level=info msg="Start streaming server" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420748385Z" level=info msg="Registered 
namespace \"k8s.io\" with NRI" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420755918Z" level=info msg="runtime interface starting up..." Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420761619Z" level=info msg="starting plugins..." Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420773646Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 22:03:23.420908 containerd[1533]: time="2025-09-12T22:03:23.420814888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 22:03:23.422167 containerd[1533]: time="2025-09-12T22:03:23.422112597Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 22:03:23.422317 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 22:03:23.423921 containerd[1533]: time="2025-09-12T22:03:23.423884852Z" level=info msg="containerd successfully booted in 0.100797s" Sep 12 22:03:23.438951 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 22:03:23.441806 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 22:03:23.466658 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 22:03:23.466882 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 22:03:23.470621 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 22:03:23.500896 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 22:03:23.503717 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 22:03:23.504474 tar[1529]: linux-arm64/README.md Sep 12 22:03:23.506021 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 22:03:23.507393 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 22:03:23.522066 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 22:03:24.092243 systemd-networkd[1451]: eth0: Gained IPv6LL Sep 12 22:03:24.094786 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 22:03:24.097541 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 22:03:24.100426 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 22:03:24.103125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:24.113733 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 22:03:24.128945 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 22:03:24.129180 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 22:03:24.131177 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 22:03:24.134499 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 22:03:24.679449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:24.681266 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 22:03:24.683593 systemd[1]: Startup finished in 2.053s (kernel) + 5.794s (initrd) + 3.337s (userspace) = 11.184s. 
Sep 12 22:03:24.683749 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:03:25.060017 kubelet[1632]: E0912 22:03:25.059867 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:03:25.062294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:03:25.062431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:03:25.062735 systemd[1]: kubelet.service: Consumed 754ms CPU time, 256M memory peak. Sep 12 22:03:28.926250 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 22:03:28.927251 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:41164.service - OpenSSH per-connection server daemon (10.0.0.1:41164). Sep 12 22:03:29.025300 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 41164 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.026913 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.032609 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 22:03:29.033492 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 22:03:29.038628 systemd-logind[1517]: New session 1 of user core. Sep 12 22:03:29.052935 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 22:03:29.055330 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 22:03:29.075632 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 22:03:29.077805 systemd-logind[1517]: New session c1 of user core. Sep 12 22:03:29.175686 systemd[1651]: Queued start job for default target default.target. Sep 12 22:03:29.197729 systemd[1651]: Created slice app.slice - User Application Slice. Sep 12 22:03:29.197912 systemd[1651]: Reached target paths.target - Paths. Sep 12 22:03:29.198014 systemd[1651]: Reached target timers.target - Timers. Sep 12 22:03:29.199273 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 22:03:29.208255 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 22:03:29.208314 systemd[1651]: Reached target sockets.target - Sockets. Sep 12 22:03:29.208355 systemd[1651]: Reached target basic.target - Basic System. Sep 12 22:03:29.208383 systemd[1651]: Reached target default.target - Main User Target. Sep 12 22:03:29.208408 systemd[1651]: Startup finished in 125ms. Sep 12 22:03:29.208507 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 22:03:29.209705 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 22:03:29.271933 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:41172.service - OpenSSH per-connection server daemon (10.0.0.1:41172). Sep 12 22:03:29.332060 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 41172 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.333205 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.337461 systemd-logind[1517]: New session 2 of user core. 
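
The kubelet exit above (and the retry that follows later in this log) comes from the missing /var/lib/kubelet/config.yaml: on a freshly provisioned node that file does not exist until something like kubeadm init or kubeadm join writes it, so systemd simply restarts the unit until it appears. A small diagnostic sketch along those lines; the config path is taken from the error text above, while the kubeconfig path and the kubeadm flow are assumptions about how such a node is usually provisioned.

from pathlib import Path

# Config path taken from the kubelet error above; treating kubeadm as the tool
# that will eventually generate it is an assumption about this node's setup.
config = Path("/var/lib/kubelet/config.yaml")
kubeconfig = Path("/etc/kubernetes/kubelet.conf")   # assumed kubeadm-style path

if not config.exists():
    print(f"{config} is missing: the kubelet keeps exiting until something "
          f"like 'kubeadm init' or 'kubeadm join' writes it")
elif not kubeconfig.exists():
    print(f"{kubeconfig} is missing: the kubelet is configured but has no API "
          f"credentials yet")
else:
    print("kubelet configuration and kubeconfig are both present")
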
Sep 12 22:03:29.343021 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 22:03:29.395028 sshd[1665]: Connection closed by 10.0.0.1 port 41172 Sep 12 22:03:29.395427 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:29.405657 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:41172.service: Deactivated successfully. Sep 12 22:03:29.408028 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 22:03:29.408914 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Sep 12 22:03:29.411037 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:41180.service - OpenSSH per-connection server daemon (10.0.0.1:41180). Sep 12 22:03:29.411880 systemd-logind[1517]: Removed session 2. Sep 12 22:03:29.460860 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 41180 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.462133 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.466668 systemd-logind[1517]: New session 3 of user core. Sep 12 22:03:29.479998 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 22:03:29.528740 sshd[1674]: Connection closed by 10.0.0.1 port 41180 Sep 12 22:03:29.529210 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:29.543737 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:41180.service: Deactivated successfully. Sep 12 22:03:29.545384 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 22:03:29.547345 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. Sep 12 22:03:29.549673 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:41196.service - OpenSSH per-connection server daemon (10.0.0.1:41196). Sep 12 22:03:29.550472 systemd-logind[1517]: Removed session 3. Sep 12 22:03:29.603675 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 41196 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.604822 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.608694 systemd-logind[1517]: New session 4 of user core. Sep 12 22:03:29.617987 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 22:03:29.669797 sshd[1683]: Connection closed by 10.0.0.1 port 41196 Sep 12 22:03:29.669669 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:29.682044 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:41196.service: Deactivated successfully. Sep 12 22:03:29.684099 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 22:03:29.684768 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Sep 12 22:03:29.686898 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202). Sep 12 22:03:29.687775 systemd-logind[1517]: Removed session 4. Sep 12 22:03:29.734528 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.735662 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.740124 systemd-logind[1517]: New session 5 of user core. Sep 12 22:03:29.748016 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 22:03:29.804793 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 22:03:29.805112 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:03:29.819655 sudo[1693]: pam_unix(sudo:session): session closed for user root Sep 12 22:03:29.821057 sshd[1692]: Connection closed by 10.0.0.1 port 41202 Sep 12 22:03:29.821489 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:29.830885 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:41202.service: Deactivated successfully. Sep 12 22:03:29.832553 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 22:03:29.833447 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Sep 12 22:03:29.836929 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:34742.service - OpenSSH per-connection server daemon (10.0.0.1:34742). Sep 12 22:03:29.837414 systemd-logind[1517]: Removed session 5. Sep 12 22:03:29.890604 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 34742 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:29.891717 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:29.895430 systemd-logind[1517]: New session 6 of user core. Sep 12 22:03:29.901972 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 22:03:29.953416 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 22:03:29.953688 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:03:30.032957 sudo[1704]: pam_unix(sudo:session): session closed for user root Sep 12 22:03:30.038042 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 22:03:30.038309 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:03:30.046819 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:03:30.084255 augenrules[1726]: No rules Sep 12 22:03:30.085366 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:03:30.085622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:03:30.086483 sudo[1703]: pam_unix(sudo:session): session closed for user root Sep 12 22:03:30.087970 sshd[1702]: Connection closed by 10.0.0.1 port 34742 Sep 12 22:03:30.088437 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:30.094865 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:34742.service: Deactivated successfully. Sep 12 22:03:30.097058 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 22:03:30.097787 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Sep 12 22:03:30.099628 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780). Sep 12 22:03:30.100405 systemd-logind[1517]: Removed session 6. Sep 12 22:03:30.148860 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:03:30.150143 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:03:30.154138 systemd-logind[1517]: New session 7 of user core. Sep 12 22:03:30.163978 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 22:03:30.214366 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 22:03:30.214630 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:03:30.481523 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 22:03:30.495134 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 22:03:30.692668 dockerd[1760]: time="2025-09-12T22:03:30.692608167Z" level=info msg="Starting up" Sep 12 22:03:30.693505 dockerd[1760]: time="2025-09-12T22:03:30.693484167Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 22:03:30.702988 dockerd[1760]: time="2025-09-12T22:03:30.702944381Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 22:03:30.734733 dockerd[1760]: time="2025-09-12T22:03:30.734626987Z" level=info msg="Loading containers: start." Sep 12 22:03:30.744035 kernel: Initializing XFRM netlink socket Sep 12 22:03:30.933978 systemd-networkd[1451]: docker0: Link UP Sep 12 22:03:30.937294 dockerd[1760]: time="2025-09-12T22:03:30.937244858Z" level=info msg="Loading containers: done." Sep 12 22:03:30.950484 dockerd[1760]: time="2025-09-12T22:03:30.950429189Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 22:03:30.950622 dockerd[1760]: time="2025-09-12T22:03:30.950524355Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 22:03:30.950622 dockerd[1760]: time="2025-09-12T22:03:30.950604080Z" level=info msg="Initializing buildkit" Sep 12 22:03:30.977312 dockerd[1760]: time="2025-09-12T22:03:30.977249910Z" level=info msg="Completed buildkit initialization" Sep 12 22:03:30.981982 dockerd[1760]: time="2025-09-12T22:03:30.981948566Z" level=info msg="Daemon has completed initialization" Sep 12 22:03:30.982136 dockerd[1760]: time="2025-09-12T22:03:30.982027925Z" level=info msg="API listen on /run/docker.sock" Sep 12 22:03:30.982227 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 22:03:31.876232 containerd[1533]: time="2025-09-12T22:03:31.875845050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 22:03:32.510460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506638145.mount: Deactivated successfully. 
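
dockerd's overlay2 warning above ("kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") only affects image builds, where the daemon falls back to a slower non-native diff. If the kernel exposes its build configuration, the option can be confirmed directly; a sketch, assuming CONFIG_IKCONFIG_PROC is enabled so that /proc/config.gz exists on the host.

import gzip
from pathlib import Path

# Option named in the dockerd warning above.
OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

cfg = Path("/proc/config.gz")  # present only when CONFIG_IKCONFIG_PROC=y
if cfg.exists():
    text = gzip.decompress(cfg.read_bytes()).decode()
    hits = [l for l in text.splitlines() if OPTION in l]
    print("\n".join(hits) or f"{OPTION} not mentioned in this kernel config")
else:
    print("kernel config not exposed at /proc/config.gz on this host")
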
Sep 12 22:03:33.533232 containerd[1533]: time="2025-09-12T22:03:33.533180218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:33.534336 containerd[1533]: time="2025-09-12T22:03:33.534308280Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 12 22:03:33.535253 containerd[1533]: time="2025-09-12T22:03:33.535230094Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:33.537654 containerd[1533]: time="2025-09-12T22:03:33.537603210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:33.539116 containerd[1533]: time="2025-09-12T22:03:33.538925555Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.663038723s" Sep 12 22:03:33.539116 containerd[1533]: time="2025-09-12T22:03:33.538967395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 22:03:33.539520 containerd[1533]: time="2025-09-12T22:03:33.539492477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 22:03:34.799942 containerd[1533]: time="2025-09-12T22:03:34.799794598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:34.801859 containerd[1533]: time="2025-09-12T22:03:34.801790912Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 12 22:03:34.803501 containerd[1533]: time="2025-09-12T22:03:34.803458678Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:34.805956 containerd[1533]: time="2025-09-12T22:03:34.805927827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:34.807188 containerd[1533]: time="2025-09-12T22:03:34.807010967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.267487171s" Sep 12 22:03:34.807188 containerd[1533]: time="2025-09-12T22:03:34.807040479Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 22:03:34.807533 
containerd[1533]: time="2025-09-12T22:03:34.807510610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 22:03:35.312941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 22:03:35.314742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:35.447953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:35.450279 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:03:35.490503 kubelet[2049]: E0912 22:03:35.490457 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:03:35.493572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:03:35.493703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:03:35.494100 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak. Sep 12 22:03:35.895371 containerd[1533]: time="2025-09-12T22:03:35.895325341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:35.895865 containerd[1533]: time="2025-09-12T22:03:35.895832555Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 12 22:03:35.896713 containerd[1533]: time="2025-09-12T22:03:35.896658501Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:35.899554 containerd[1533]: time="2025-09-12T22:03:35.899510318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:35.901443 containerd[1533]: time="2025-09-12T22:03:35.901330893Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.093791144s" Sep 12 22:03:35.901443 containerd[1533]: time="2025-09-12T22:03:35.901362467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 22:03:35.901804 containerd[1533]: time="2025-09-12T22:03:35.901782780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 22:03:36.868958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335250711.mount: Deactivated successfully. 
Sep 12 22:03:37.241479 containerd[1533]: time="2025-09-12T22:03:37.241368106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:37.242419 containerd[1533]: time="2025-09-12T22:03:37.242217203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 12 22:03:37.243067 containerd[1533]: time="2025-09-12T22:03:37.243035066Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:37.245155 containerd[1533]: time="2025-09-12T22:03:37.245126696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:37.245649 containerd[1533]: time="2025-09-12T22:03:37.245572335Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.343761059s" Sep 12 22:03:37.245649 containerd[1533]: time="2025-09-12T22:03:37.245612262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 22:03:37.246190 containerd[1533]: time="2025-09-12T22:03:37.246016847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 22:03:37.747530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088955346.mount: Deactivated successfully. 
Sep 12 22:03:38.371373 containerd[1533]: time="2025-09-12T22:03:38.371314532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:38.371805 containerd[1533]: time="2025-09-12T22:03:38.371757454Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 22:03:38.372742 containerd[1533]: time="2025-09-12T22:03:38.372695419Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:38.375842 containerd[1533]: time="2025-09-12T22:03:38.375737833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:38.376524 containerd[1533]: time="2025-09-12T22:03:38.376491801Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.130445739s" Sep 12 22:03:38.376630 containerd[1533]: time="2025-09-12T22:03:38.376614707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 22:03:38.377220 containerd[1533]: time="2025-09-12T22:03:38.377190348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 22:03:38.826673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902012.mount: Deactivated successfully. 
Sep 12 22:03:38.831549 containerd[1533]: time="2025-09-12T22:03:38.831496724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:03:38.832020 containerd[1533]: time="2025-09-12T22:03:38.831989275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 22:03:38.832952 containerd[1533]: time="2025-09-12T22:03:38.832902425Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:03:38.834623 containerd[1533]: time="2025-09-12T22:03:38.834597294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:03:38.836047 containerd[1533]: time="2025-09-12T22:03:38.836000984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 458.774158ms" Sep 12 22:03:38.836047 containerd[1533]: time="2025-09-12T22:03:38.836032676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 22:03:38.836431 containerd[1533]: time="2025-09-12T22:03:38.836410927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 22:03:39.306084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100169111.mount: Deactivated successfully. 
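
Each of the image pulls above (and the etcd pull that follows) logs its duration in a "Pulled image ... in <duration>" line. Those durations can be tallied straight from the journal; a sketch, assuming the containerd journal is piped in on stdin (for example via journalctl -u containerd).

import re
import sys

# Matches containerd journal lines of the form:
#   ... msg="Pulled image \"<ref>\" with image id ... in 1.663038723s"
PAT = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".* in (?P<value>[\d.]+)(?P<unit>ms|s)'
)

total = 0.0
for line in sys.stdin:
    m = PAT.search(line)
    if not m:
        continue
    secs = float(m.group("value")) / (1000.0 if m.group("unit") == "ms" else 1.0)
    total += secs
    print(f"{secs:9.3f}s  {m.group('image')}")
print(f"{total:9.3f}s  total")
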
Sep 12 22:03:40.972910 containerd[1533]: time="2025-09-12T22:03:40.972862295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:40.974562 containerd[1533]: time="2025-09-12T22:03:40.974524443Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 12 22:03:40.975504 containerd[1533]: time="2025-09-12T22:03:40.975459569Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:40.978698 containerd[1533]: time="2025-09-12T22:03:40.978663043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:03:40.979763 containerd[1533]: time="2025-09-12T22:03:40.979726824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.143229559s" Sep 12 22:03:40.979763 containerd[1533]: time="2025-09-12T22:03:40.979763015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 22:03:44.879215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:44.879377 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak. Sep 12 22:03:44.882270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:44.915148 systemd[1]: Reload requested from client PID 2209 ('systemctl') (unit session-7.scope)... Sep 12 22:03:44.915169 systemd[1]: Reloading... Sep 12 22:03:44.990854 zram_generator::config[2255]: No configuration found. Sep 12 22:03:45.181966 systemd[1]: Reloading finished in 266 ms. Sep 12 22:03:45.255462 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 22:03:45.255544 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 22:03:45.255771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:45.255832 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. Sep 12 22:03:45.257217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:45.371164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:45.390254 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:03:45.430216 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:03:45.430216 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:03:45.430216 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:03:45.431494 kubelet[2297]: I0912 22:03:45.431435 2297 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:03:45.855354 kubelet[2297]: I0912 22:03:45.855306 2297 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 22:03:45.855354 kubelet[2297]: I0912 22:03:45.855339 2297 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:03:45.855677 kubelet[2297]: I0912 22:03:45.855650 2297 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 22:03:45.883839 kubelet[2297]: E0912 22:03:45.883142 2297 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:03:45.884881 kubelet[2297]: I0912 22:03:45.884791 2297 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:03:45.894045 kubelet[2297]: I0912 22:03:45.894022 2297 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:03:45.896921 kubelet[2297]: I0912 22:03:45.896758 2297 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:03:45.897416 kubelet[2297]: I0912 22:03:45.897380 2297 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:03:45.897601 kubelet[2297]: I0912 22:03:45.897419 2297 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:03:45.897730 kubelet[2297]: I0912 22:03:45.897675 2297 
topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:03:45.897730 kubelet[2297]: I0912 22:03:45.897686 2297 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 22:03:45.897901 kubelet[2297]: I0912 22:03:45.897884 2297 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:03:45.900329 kubelet[2297]: I0912 22:03:45.900291 2297 kubelet.go:446] "Attempting to sync node with API server" Sep 12 22:03:45.900329 kubelet[2297]: I0912 22:03:45.900319 2297 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:03:45.900633 kubelet[2297]: I0912 22:03:45.900341 2297 kubelet.go:352] "Adding apiserver pod source" Sep 12 22:03:45.900633 kubelet[2297]: I0912 22:03:45.900351 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:03:45.903003 kubelet[2297]: I0912 22:03:45.902973 2297 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:03:45.903069 kubelet[2297]: W0912 22:03:45.903026 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:03:45.903126 kubelet[2297]: E0912 22:03:45.903073 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:03:45.903737 kubelet[2297]: I0912 22:03:45.903674 2297 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:03:45.903887 kubelet[2297]: W0912 22:03:45.903661 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:03:45.903887 kubelet[2297]: E0912 22:03:45.903866 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:03:45.903887 kubelet[2297]: W0912 22:03:45.903785 2297 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
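
Every "dial tcp 10.0.0.34:6443: connect: connection refused" error above is the kubelet trying to reach an API server that is not listening yet; as the rest of this log shows, the kube-apiserver here runs as a static pod that this same kubelet is about to create, so the errors clear once that pod is up. A quick reachability probe for the same endpoint (address and port are taken from the log; nothing else is assumed).

import socket

# Endpoint taken from the kubelet errors above.
HOST, PORT = "10.0.0.34", 6443

try:
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"{HOST}:{PORT} is accepting connections (apiserver likely up)")
except ConnectionRefusedError:
    print(f"{HOST}:{PORT} refused: nothing is listening yet")
except OSError as exc:  # timeouts, unreachable network, ...
    print(f"{HOST}:{PORT} unreachable: {exc}")
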
Sep 12 22:03:45.904810 kubelet[2297]: I0912 22:03:45.904781 2297 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:03:45.904810 kubelet[2297]: I0912 22:03:45.904811 2297 server.go:1287] "Started kubelet" Sep 12 22:03:45.905075 kubelet[2297]: I0912 22:03:45.905034 2297 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:03:45.905116 kubelet[2297]: I0912 22:03:45.905074 2297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:03:45.905472 kubelet[2297]: I0912 22:03:45.905446 2297 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:03:45.906622 kubelet[2297]: I0912 22:03:45.906593 2297 server.go:479] "Adding debug handlers to kubelet server" Sep 12 22:03:45.910599 kubelet[2297]: I0912 22:03:45.910551 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:03:45.910980 kubelet[2297]: E0912 22:03:45.910691 2297 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864a823b5905c42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:03:45.90479469 +0000 UTC m=+0.505894486,LastTimestamp:2025-09-12 22:03:45.90479469 +0000 UTC m=+0.505894486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:03:45.911101 kubelet[2297]: I0912 22:03:45.911082 2297 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:03:45.912707 kubelet[2297]: I0912 22:03:45.912650 2297 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:03:45.912905 kubelet[2297]: E0912 22:03:45.912890 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:03:45.914777 kubelet[2297]: E0912 22:03:45.914705 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 12 22:03:45.915539 kubelet[2297]: I0912 22:03:45.915519 2297 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:03:45.915788 kubelet[2297]: I0912 22:03:45.915758 2297 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:03:45.917184 kubelet[2297]: I0912 22:03:45.917159 2297 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:03:45.917246 kubelet[2297]: I0912 22:03:45.917225 2297 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:03:45.917403 kubelet[2297]: E0912 22:03:45.917370 2297 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:03:45.917403 kubelet[2297]: W0912 22:03:45.917365 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:03:45.917531 kubelet[2297]: I0912 22:03:45.917429 2297 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:03:45.917531 kubelet[2297]: E0912 22:03:45.917463 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:03:45.929811 kubelet[2297]: I0912 22:03:45.929788 2297 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 22:03:45.929811 kubelet[2297]: I0912 22:03:45.929806 2297 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 22:03:45.929933 kubelet[2297]: I0912 22:03:45.929835 2297 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:03:45.929933 kubelet[2297]: I0912 22:03:45.929853 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:03:45.931704 kubelet[2297]: I0912 22:03:45.931481 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:03:45.932008 kubelet[2297]: I0912 22:03:45.931971 2297 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 22:03:45.932063 kubelet[2297]: I0912 22:03:45.932045 2297 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 22:03:45.932563 kubelet[2297]: I0912 22:03:45.932358 2297 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 22:03:45.932563 kubelet[2297]: E0912 22:03:45.932459 2297 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:03:46.000382 kubelet[2297]: I0912 22:03:46.000315 2297 policy_none.go:49] "None policy: Start" Sep 12 22:03:46.000382 kubelet[2297]: I0912 22:03:46.000348 2297 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 22:03:46.000382 kubelet[2297]: I0912 22:03:46.000362 2297 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:03:46.000602 kubelet[2297]: W0912 22:03:46.000424 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:03:46.000602 kubelet[2297]: E0912 22:03:46.000482 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:03:46.005549 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 12 22:03:46.013181 kubelet[2297]: E0912 22:03:46.013145 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:03:46.017201 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 22:03:46.020637 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 22:03:46.033319 kubelet[2297]: E0912 22:03:46.033285 2297 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:03:46.037875 kubelet[2297]: I0912 22:03:46.037845 2297 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:03:46.038142 kubelet[2297]: I0912 22:03:46.038047 2297 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:03:46.038142 kubelet[2297]: I0912 22:03:46.038060 2297 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:03:46.038332 kubelet[2297]: I0912 22:03:46.038250 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:03:46.039578 kubelet[2297]: E0912 22:03:46.039435 2297 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 22:03:46.039578 kubelet[2297]: E0912 22:03:46.039481 2297 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 22:03:46.116448 kubelet[2297]: E0912 22:03:46.116337 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Sep 12 22:03:46.139968 kubelet[2297]: I0912 22:03:46.139942 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:03:46.140365 kubelet[2297]: E0912 22:03:46.140340 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:03:46.245730 systemd[1]: Created slice kubepods-burstable-podec2c4eb9056f9fd7ae16f19ddd037637.slice - libcontainer container kubepods-burstable-podec2c4eb9056f9fd7ae16f19ddd037637.slice. Sep 12 22:03:46.266577 kubelet[2297]: E0912 22:03:46.266522 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.270083 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 12 22:03:46.275410 kubelet[2297]: E0912 22:03:46.275386 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.275573 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
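
The kubepods-burstable-pod<uid>.slice units created above are how the kubelet lays out pod cgroups with the systemd cgroup driver reported earlier (cgroupDriver="systemd"): kubepods.slice contains a per-QoS slice, which contains one slice per pod UID. The resulting cgroupfs path can be reconstructed from a UID; a sketch, assuming the cgroup2 mount at /sys/fs/cgroup, and applying the usual dash-to-underscore rewrite for UIDs that contain dashes (the static-pod UIDs above happen to have none).

from pathlib import Path

def pod_slice_path(uid: str, qos: str = "burstable",
                   cgroup_root: str = "/sys/fs/cgroup") -> Path:
    """Build the cgroupfs path of a pod slice under the systemd cgroup driver.

    Dashes in the UID become underscores, since '-' is the nesting
    separator in systemd slice names.
    """
    pod = f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"
    return Path(cgroup_root, "kubepods.slice", f"kubepods-{qos}.slice", pod)

# UID of the kube-scheduler static pod, from the slices created above.
print(pod_slice_path("72a30db4fc25e4da65a3b99eba43be94"))
# -> /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/
#    kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice
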
Sep 12 22:03:46.277466 kubelet[2297]: E0912 22:03:46.277201 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.317891 kubelet[2297]: I0912 22:03:46.317845 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:46.317891 kubelet[2297]: I0912 22:03:46.317882 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:46.317891 kubelet[2297]: I0912 22:03:46.317899 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:46.317891 kubelet[2297]: I0912 22:03:46.317916 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:03:46.317891 kubelet[2297]: I0912 22:03:46.317945 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:46.318294 kubelet[2297]: I0912 22:03:46.317981 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:46.318294 kubelet[2297]: I0912 22:03:46.318014 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:46.318294 kubelet[2297]: I0912 22:03:46.318037 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:46.318294 kubelet[2297]: I0912 22:03:46.318069 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:46.342112 kubelet[2297]: I0912 22:03:46.342072 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:03:46.342403 kubelet[2297]: E0912 22:03:46.342373 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:03:46.517957 kubelet[2297]: E0912 22:03:46.517846 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Sep 12 22:03:46.567356 kubelet[2297]: E0912 22:03:46.567223 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.568096 containerd[1533]: time="2025-09-12T22:03:46.567848078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec2c4eb9056f9fd7ae16f19ddd037637,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:46.576105 kubelet[2297]: E0912 22:03:46.576069 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.576535 containerd[1533]: time="2025-09-12T22:03:46.576493040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:46.578659 kubelet[2297]: E0912 22:03:46.578615 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.578975 containerd[1533]: time="2025-09-12T22:03:46.578950434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:46.614140 containerd[1533]: time="2025-09-12T22:03:46.614026249Z" level=info msg="connecting to shim b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0" address="unix:///run/containerd/s/6b20a4099cdb0adc464b2ec20b2305568cfc053e2bd04ca5c100cda347ccdeb1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:46.616470 containerd[1533]: time="2025-09-12T22:03:46.616408382Z" level=info msg="connecting to shim ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11" address="unix:///run/containerd/s/cb8f5e669cc540ff7d39e6084a4679f435da2999b6d8555059600e80051dab65" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:46.619345 containerd[1533]: time="2025-09-12T22:03:46.619311209Z" level=info msg="connecting to shim 97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8" address="unix:///run/containerd/s/c7bb5fd5fbcec66647a9d1f3b7e611b5fcc70a2a1b5986eca244fe4ab5442baa" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:46.640973 systemd[1]: Started cri-containerd-ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11.scope - libcontainer container 
ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11. Sep 12 22:03:46.643502 systemd[1]: Started cri-containerd-b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0.scope - libcontainer container b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0. Sep 12 22:03:46.647501 systemd[1]: Started cri-containerd-97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8.scope - libcontainer container 97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8. Sep 12 22:03:46.690161 containerd[1533]: time="2025-09-12T22:03:46.690114818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0\"" Sep 12 22:03:46.692446 kubelet[2297]: E0912 22:03:46.691931 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.695157 containerd[1533]: time="2025-09-12T22:03:46.695107753Z" level=info msg="CreateContainer within sandbox \"b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:03:46.695530 containerd[1533]: time="2025-09-12T22:03:46.695171031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11\"" Sep 12 22:03:46.697137 kubelet[2297]: E0912 22:03:46.697076 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.697231 containerd[1533]: time="2025-09-12T22:03:46.697107531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec2c4eb9056f9fd7ae16f19ddd037637,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8\"" Sep 12 22:03:46.697899 kubelet[2297]: E0912 22:03:46.697853 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.699842 containerd[1533]: time="2025-09-12T22:03:46.699233065Z" level=info msg="CreateContainer within sandbox \"ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:03:46.700068 containerd[1533]: time="2025-09-12T22:03:46.700030356Z" level=info msg="CreateContainer within sandbox \"97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:03:46.705989 containerd[1533]: time="2025-09-12T22:03:46.705944853Z" level=info msg="Container b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:03:46.709413 containerd[1533]: time="2025-09-12T22:03:46.709370698Z" level=info msg="Container 8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:03:46.717698 containerd[1533]: time="2025-09-12T22:03:46.717545140Z" level=info msg="CreateContainer within sandbox 
\"b15cc8dfb8a15185dc8eb91b4f4b2267f51e27839df8d5b4f80ec9994620a9e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61\"" Sep 12 22:03:46.717802 containerd[1533]: time="2025-09-12T22:03:46.717673580Z" level=info msg="Container 1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:03:46.718295 containerd[1533]: time="2025-09-12T22:03:46.718224570Z" level=info msg="StartContainer for \"b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61\"" Sep 12 22:03:46.719370 containerd[1533]: time="2025-09-12T22:03:46.719332201Z" level=info msg="connecting to shim b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61" address="unix:///run/containerd/s/6b20a4099cdb0adc464b2ec20b2305568cfc053e2bd04ca5c100cda347ccdeb1" protocol=ttrpc version=3 Sep 12 22:03:46.741006 systemd[1]: Started cri-containerd-b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61.scope - libcontainer container b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61. Sep 12 22:03:46.743686 kubelet[2297]: I0912 22:03:46.743651 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:03:46.744073 kubelet[2297]: E0912 22:03:46.744048 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:03:46.814144 containerd[1533]: time="2025-09-12T22:03:46.814043947Z" level=info msg="StartContainer for \"b6dd9587f9df38679e43c37308fcee7ed6982026b252dd79268372e4a03abc61\" returns successfully" Sep 12 22:03:46.822480 containerd[1533]: time="2025-09-12T22:03:46.822426378Z" level=info msg="CreateContainer within sandbox \"ff454aa5e90664f555c7b10a659f0e1c6b5a85b6923ba871ab64c2f0be668d11\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c\"" Sep 12 22:03:46.822979 containerd[1533]: time="2025-09-12T22:03:46.822953123Z" level=info msg="StartContainer for \"8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c\"" Sep 12 22:03:46.824453 containerd[1533]: time="2025-09-12T22:03:46.824428842Z" level=info msg="connecting to shim 8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c" address="unix:///run/containerd/s/cb8f5e669cc540ff7d39e6084a4679f435da2999b6d8555059600e80051dab65" protocol=ttrpc version=3 Sep 12 22:03:46.825028 containerd[1533]: time="2025-09-12T22:03:46.824999909Z" level=info msg="CreateContainer within sandbox \"97c055b768f4720d6dee10fd13e40840e4e40d14e14c463648ba197dca2be6c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13\"" Sep 12 22:03:46.825548 containerd[1533]: time="2025-09-12T22:03:46.825523288Z" level=info msg="StartContainer for \"1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13\"" Sep 12 22:03:46.827643 containerd[1533]: time="2025-09-12T22:03:46.827613956Z" level=info msg="connecting to shim 1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13" address="unix:///run/containerd/s/c7bb5fd5fbcec66647a9d1f3b7e611b5fcc70a2a1b5986eca244fe4ab5442baa" protocol=ttrpc version=3 Sep 12 22:03:46.853956 systemd[1]: Started cri-containerd-1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13.scope - libcontainer 
container 1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13. Sep 12 22:03:46.854854 systemd[1]: Started cri-containerd-8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c.scope - libcontainer container 8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c. Sep 12 22:03:46.899586 containerd[1533]: time="2025-09-12T22:03:46.899487686Z" level=info msg="StartContainer for \"1286b3095c550f251e3c86abf2d328f93faa49e6e6e7f5f3dc49cd4c5c0b3a13\" returns successfully" Sep 12 22:03:46.901809 containerd[1533]: time="2025-09-12T22:03:46.901737171Z" level=info msg="StartContainer for \"8f6b8ece33a2031f36a34f2a4163a6862409b095ef9c93d422680fc24d9cfc9c\" returns successfully" Sep 12 22:03:46.942332 kubelet[2297]: E0912 22:03:46.942296 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.942433 kubelet[2297]: E0912 22:03:46.942426 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.944995 kubelet[2297]: E0912 22:03:46.944973 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.945172 kubelet[2297]: E0912 22:03:46.945069 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:46.946646 kubelet[2297]: E0912 22:03:46.946628 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:46.946958 kubelet[2297]: E0912 22:03:46.946917 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:47.545959 kubelet[2297]: I0912 22:03:47.545924 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:03:47.949350 kubelet[2297]: E0912 22:03:47.949128 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:47.949350 kubelet[2297]: E0912 22:03:47.949260 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:47.951156 kubelet[2297]: E0912 22:03:47.951126 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:03:47.951440 kubelet[2297]: E0912 22:03:47.951415 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:48.593085 kubelet[2297]: E0912 22:03:48.593037 2297 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 22:03:48.733746 kubelet[2297]: I0912 22:03:48.733696 2297 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 22:03:48.817941 kubelet[2297]: I0912 22:03:48.817890 2297 kubelet.go:3194] "Creating a mirror pod 
for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:48.824240 kubelet[2297]: E0912 22:03:48.824211 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:48.824240 kubelet[2297]: I0912 22:03:48.824236 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 22:03:48.825707 kubelet[2297]: E0912 22:03:48.825679 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 22:03:48.825707 kubelet[2297]: I0912 22:03:48.825699 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:48.827187 kubelet[2297]: E0912 22:03:48.827161 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:48.904922 kubelet[2297]: I0912 22:03:48.904810 2297 apiserver.go:52] "Watching apiserver" Sep 12 22:03:48.917283 kubelet[2297]: I0912 22:03:48.917252 2297 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 22:03:49.391064 kubelet[2297]: I0912 22:03:49.390956 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:49.392948 kubelet[2297]: E0912 22:03:49.392914 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:49.393076 kubelet[2297]: E0912 22:03:49.393062 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:50.811340 systemd[1]: Reload requested from client PID 2570 ('systemctl') (unit session-7.scope)... Sep 12 22:03:50.811355 systemd[1]: Reloading... Sep 12 22:03:50.869855 zram_generator::config[2613]: No configuration found. Sep 12 22:03:51.038033 systemd[1]: Reloading finished in 226 ms. Sep 12 22:03:51.062426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:51.086836 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:03:51.087130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:51.087188 systemd[1]: kubelet.service: Consumed 864ms CPU time, 128.4M memory peak. Sep 12 22:03:51.090893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:03:51.219265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:03:51.222566 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:03:51.263410 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 22:03:51.263410 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:03:51.263410 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:03:51.263728 kubelet[2655]: I0912 22:03:51.263456 2655 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:03:51.268715 kubelet[2655]: I0912 22:03:51.268683 2655 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 22:03:51.268715 kubelet[2655]: I0912 22:03:51.268707 2655 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:03:51.270030 kubelet[2655]: I0912 22:03:51.269319 2655 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 22:03:51.270891 kubelet[2655]: I0912 22:03:51.270871 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 22:03:51.272986 kubelet[2655]: I0912 22:03:51.272968 2655 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:03:51.275974 kubelet[2655]: I0912 22:03:51.275954 2655 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:03:51.278504 kubelet[2655]: I0912 22:03:51.278481 2655 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:03:51.278679 kubelet[2655]: I0912 22:03:51.278655 2655 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:03:51.278845 kubelet[2655]: I0912 22:03:51.278678 2655 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:03:51.278916 kubelet[2655]: I0912 22:03:51.278856 2655 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:03:51.278916 kubelet[2655]: I0912 22:03:51.278865 2655 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 22:03:51.278916 kubelet[2655]: I0912 22:03:51.278908 2655 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:03:51.279038 kubelet[2655]: I0912 22:03:51.279027 2655 kubelet.go:446] "Attempting to sync node with API server" Sep 12 22:03:51.279061 kubelet[2655]: I0912 22:03:51.279042 2655 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:03:51.279079 kubelet[2655]: I0912 22:03:51.279062 2655 kubelet.go:352] "Adding apiserver pod source" Sep 12 22:03:51.279079 kubelet[2655]: I0912 22:03:51.279074 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:03:51.279755 kubelet[2655]: I0912 22:03:51.279721 2655 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:03:51.280577 kubelet[2655]: I0912 22:03:51.280185 2655 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:03:51.280641 kubelet[2655]: I0912 22:03:51.280621 2655 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:03:51.280667 kubelet[2655]: I0912 22:03:51.280644 2655 server.go:1287] "Started kubelet" Sep 12 22:03:51.280972 kubelet[2655]: I0912 22:03:51.280707 2655 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:03:51.283191 kubelet[2655]: I0912 22:03:51.281738 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:03:51.283191 kubelet[2655]: I0912 22:03:51.281949 2655 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:03:51.283191 kubelet[2655]: I0912 22:03:51.282126 2655 server.go:479] "Adding 
debug handlers to kubelet server" Sep 12 22:03:51.284548 kubelet[2655]: E0912 22:03:51.284500 2655 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:03:51.288620 kubelet[2655]: I0912 22:03:51.285693 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:03:51.288620 kubelet[2655]: I0912 22:03:51.285941 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:03:51.288786 kubelet[2655]: I0912 22:03:51.288760 2655 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:03:51.290821 kubelet[2655]: E0912 22:03:51.290166 2655 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:03:51.291357 kubelet[2655]: I0912 22:03:51.291172 2655 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:03:51.291357 kubelet[2655]: I0912 22:03:51.291271 2655 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:03:51.304017 kubelet[2655]: I0912 22:03:51.303988 2655 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:03:51.304095 kubelet[2655]: I0912 22:03:51.304072 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:03:51.305151 kubelet[2655]: I0912 22:03:51.305100 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:03:51.307681 kubelet[2655]: I0912 22:03:51.307663 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:03:51.308668 kubelet[2655]: I0912 22:03:51.308651 2655 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 22:03:51.309284 kubelet[2655]: I0912 22:03:51.309262 2655 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
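Annotation: the Container Manager dump a few entries above includes the default HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and so on). A self-contained sketch that decodes that JSON fragment into local structs; the struct types below are defined here for illustration and are not the kubelet's own types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative local types matching the shape of the logged fragment.
    type thresholdValue struct {
        Quantity   *string `json:"Quantity"`
        Percentage float64 `json:"Percentage"`
    }

    type threshold struct {
        Signal   string         `json:"Signal"`
        Operator string         `json:"Operator"`
        Value    thresholdValue `json:"Value"`
    }

    func main() {
        raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
                 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
                 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]`

        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }

Crossing any of these thresholds is what the eviction manager started above acts on once node stats become available.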
Sep 12 22:03:51.309384 kubelet[2655]: I0912 22:03:51.309374 2655 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 22:03:51.309872 kubelet[2655]: E0912 22:03:51.309852 2655 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:03:51.309953 kubelet[2655]: I0912 22:03:51.309453 2655 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:03:51.337230 kubelet[2655]: I0912 22:03:51.337153 2655 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 22:03:51.337363 kubelet[2655]: I0912 22:03:51.337341 2655 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 22:03:51.337421 kubelet[2655]: I0912 22:03:51.337414 2655 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:03:51.337619 kubelet[2655]: I0912 22:03:51.337599 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 22:03:51.337693 kubelet[2655]: I0912 22:03:51.337671 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 22:03:51.337740 kubelet[2655]: I0912 22:03:51.337733 2655 policy_none.go:49] "None policy: Start" Sep 12 22:03:51.337789 kubelet[2655]: I0912 22:03:51.337782 2655 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 22:03:51.337869 kubelet[2655]: I0912 22:03:51.337860 2655 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:03:51.338028 kubelet[2655]: I0912 22:03:51.338017 2655 state_mem.go:75] "Updated machine memory state" Sep 12 22:03:51.341307 kubelet[2655]: I0912 22:03:51.341290 2655 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:03:51.341461 kubelet[2655]: I0912 22:03:51.341443 2655 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:03:51.341493 kubelet[2655]: I0912 22:03:51.341458 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:03:51.341688 kubelet[2655]: I0912 22:03:51.341675 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:03:51.342808 kubelet[2655]: E0912 22:03:51.342790 2655 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 22:03:51.410460 kubelet[2655]: I0912 22:03:51.410426 2655 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.410605 kubelet[2655]: I0912 22:03:51.410531 2655 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:51.410605 kubelet[2655]: I0912 22:03:51.410548 2655 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 22:03:51.445217 kubelet[2655]: I0912 22:03:51.445153 2655 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:03:51.451853 kubelet[2655]: I0912 22:03:51.451499 2655 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 22:03:51.451853 kubelet[2655]: I0912 22:03:51.451574 2655 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 22:03:51.592884 kubelet[2655]: I0912 22:03:51.592280 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.592884 kubelet[2655]: I0912 22:03:51.592313 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:51.592884 kubelet[2655]: I0912 22:03:51.592355 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:51.592884 kubelet[2655]: I0912 22:03:51.592377 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec2c4eb9056f9fd7ae16f19ddd037637-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec2c4eb9056f9fd7ae16f19ddd037637\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:51.592884 kubelet[2655]: I0912 22:03:51.592393 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.593066 kubelet[2655]: I0912 22:03:51.592429 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.593066 kubelet[2655]: I0912 22:03:51.592467 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.593066 kubelet[2655]: I0912 22:03:51.592489 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:03:51.593066 kubelet[2655]: I0912 22:03:51.592515 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:03:51.716446 kubelet[2655]: E0912 22:03:51.716334 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:51.716679 kubelet[2655]: E0912 22:03:51.716484 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:51.717466 kubelet[2655]: E0912 22:03:51.717448 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:51.811684 sudo[2692]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 22:03:51.812025 sudo[2692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 22:03:52.116594 sudo[2692]: pam_unix(sudo:session): session closed for user root Sep 12 22:03:52.280793 kubelet[2655]: I0912 22:03:52.280099 2655 apiserver.go:52] "Watching apiserver" Sep 12 22:03:52.291299 kubelet[2655]: I0912 22:03:52.291274 2655 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 22:03:52.323048 kubelet[2655]: I0912 22:03:52.323014 2655 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:52.324882 kubelet[2655]: E0912 22:03:52.323372 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:52.325206 kubelet[2655]: E0912 22:03:52.325141 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:52.334894 kubelet[2655]: E0912 22:03:52.334866 2655 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 22:03:52.335129 kubelet[2655]: E0912 22:03:52.335103 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:52.366792 kubelet[2655]: I0912 22:03:52.365906 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.365892191 podStartE2EDuration="1.365892191s" podCreationTimestamp="2025-09-12 22:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:03:52.365632853 +0000 UTC m=+1.140281357" watchObservedRunningTime="2025-09-12 22:03:52.365892191 +0000 UTC m=+1.140540695" Sep 12 22:03:52.366792 kubelet[2655]: I0912 22:03:52.366774 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.366764443 podStartE2EDuration="1.366764443s" podCreationTimestamp="2025-09-12 22:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:03:52.357649027 +0000 UTC m=+1.132297531" watchObservedRunningTime="2025-09-12 22:03:52.366764443 +0000 UTC m=+1.141412947" Sep 12 22:03:52.393406 kubelet[2655]: I0912 22:03:52.393174 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.39315589 podStartE2EDuration="1.39315589s" podCreationTimestamp="2025-09-12 22:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:03:52.374050483 +0000 UTC m=+1.148699067" watchObservedRunningTime="2025-09-12 22:03:52.39315589 +0000 UTC m=+1.167804394" Sep 12 22:03:53.325712 kubelet[2655]: E0912 22:03:53.325625 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:53.326083 kubelet[2655]: E0912 22:03:53.325960 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:54.142770 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 12 22:03:54.143956 sshd[1739]: Connection closed by 10.0.0.1 port 34780 Sep 12 22:03:54.145141 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 12 22:03:54.148559 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:34780.service: Deactivated successfully. Sep 12 22:03:54.150495 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 22:03:54.150740 systemd[1]: session-7.scope: Consumed 6.151s CPU time, 258.3M memory peak. Sep 12 22:03:54.151578 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Sep 12 22:03:54.153033 systemd-logind[1517]: Removed session 7. Sep 12 22:03:55.306624 kubelet[2655]: E0912 22:03:55.306551 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:56.900463 kubelet[2655]: I0912 22:03:56.900434 2655 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 22:03:56.900935 containerd[1533]: time="2025-09-12T22:03:56.900792186Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 22:03:56.901295 kubelet[2655]: I0912 22:03:56.901276 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 22:03:57.807588 systemd[1]: Created slice kubepods-besteffort-pod9e1855e1_6224_40d7_bea4_bd52e7b227c1.slice - libcontainer container kubepods-besteffort-pod9e1855e1_6224_40d7_bea4_bd52e7b227c1.slice. Sep 12 22:03:57.821555 systemd[1]: Created slice kubepods-burstable-pode4402442_790b_475f_9d84_0528ccf0a7b7.slice - libcontainer container kubepods-burstable-pode4402442_790b_475f_9d84_0528ccf0a7b7.slice. Sep 12 22:03:57.832112 kubelet[2655]: I0912 22:03:57.832074 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-lib-modules\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832112 kubelet[2655]: I0912 22:03:57.832111 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-xtables-lock\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832136 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e1855e1-6224-40d7-bea4-bd52e7b227c1-lib-modules\") pod \"kube-proxy-rjlrv\" (UID: \"9e1855e1-6224-40d7-bea4-bd52e7b227c1\") " pod="kube-system/kube-proxy-rjlrv" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832152 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cni-path\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832167 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4402442-790b-475f-9d84-0528ccf0a7b7-clustermesh-secrets\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832182 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-config-path\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832206 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-net\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832243 kubelet[2655]: I0912 22:03:57.832222 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-cgroup\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " 
pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832237 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjskj\" (UniqueName: \"kubernetes.io/projected/9e1855e1-6224-40d7-bea4-bd52e7b227c1-kube-api-access-cjskj\") pod \"kube-proxy-rjlrv\" (UID: \"9e1855e1-6224-40d7-bea4-bd52e7b227c1\") " pod="kube-system/kube-proxy-rjlrv" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832250 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e1855e1-6224-40d7-bea4-bd52e7b227c1-xtables-lock\") pod \"kube-proxy-rjlrv\" (UID: \"9e1855e1-6224-40d7-bea4-bd52e7b227c1\") " pod="kube-system/kube-proxy-rjlrv" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832283 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-hostproc\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832298 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-hubble-tls\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832313 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbqg\" (UniqueName: \"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-kube-api-access-jdbqg\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832386 kubelet[2655]: I0912 22:03:57.832327 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-run\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832493 kubelet[2655]: I0912 22:03:57.832344 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-bpf-maps\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832493 kubelet[2655]: I0912 22:03:57.832363 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-etc-cni-netd\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832493 kubelet[2655]: I0912 22:03:57.832380 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-kernel\") pod \"cilium-rv2bm\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " pod="kube-system/cilium-rv2bm" Sep 12 22:03:57.832493 kubelet[2655]: I0912 22:03:57.832398 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e1855e1-6224-40d7-bea4-bd52e7b227c1-kube-proxy\") pod \"kube-proxy-rjlrv\" (UID: \"9e1855e1-6224-40d7-bea4-bd52e7b227c1\") " pod="kube-system/kube-proxy-rjlrv" Sep 12 22:03:58.023634 systemd[1]: Created slice kubepods-besteffort-pod85459d40_62f5_4cc7_840d_d918bd342b02.slice - libcontainer container kubepods-besteffort-pod85459d40_62f5_4cc7_840d_d918bd342b02.slice. Sep 12 22:03:58.034360 kubelet[2655]: I0912 22:03:58.034302 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85459d40-62f5-4cc7-840d-d918bd342b02-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wtfj7\" (UID: \"85459d40-62f5-4cc7-840d-d918bd342b02\") " pod="kube-system/cilium-operator-6c4d7847fc-wtfj7" Sep 12 22:03:58.034360 kubelet[2655]: I0912 22:03:58.034362 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcst7\" (UniqueName: \"kubernetes.io/projected/85459d40-62f5-4cc7-840d-d918bd342b02-kube-api-access-kcst7\") pod \"cilium-operator-6c4d7847fc-wtfj7\" (UID: \"85459d40-62f5-4cc7-840d-d918bd342b02\") " pod="kube-system/cilium-operator-6c4d7847fc-wtfj7" Sep 12 22:03:58.118039 kubelet[2655]: E0912 22:03:58.117924 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.118994 containerd[1533]: time="2025-09-12T22:03:58.118925019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rjlrv,Uid:9e1855e1-6224-40d7-bea4-bd52e7b227c1,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:58.125646 kubelet[2655]: E0912 22:03:58.125372 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.126091 containerd[1533]: time="2025-09-12T22:03:58.126045412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv2bm,Uid:e4402442-790b-475f-9d84-0528ccf0a7b7,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:58.146948 containerd[1533]: time="2025-09-12T22:03:58.146894195Z" level=info msg="connecting to shim 1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f" address="unix:///run/containerd/s/08ec9879f1b07392fe1522c4e00d432222cf4110009c477e1d6746e2a85cb20d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:58.148327 kubelet[2655]: E0912 22:03:58.148297 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.166568 containerd[1533]: time="2025-09-12T22:03:58.166519214Z" level=info msg="connecting to shim f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:58.181976 systemd[1]: Started cri-containerd-1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f.scope - libcontainer container 1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f. Sep 12 22:03:58.189318 systemd[1]: Started cri-containerd-f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d.scope - libcontainer container f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d. 
Sep 12 22:03:58.215744 containerd[1533]: time="2025-09-12T22:03:58.215706307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rjlrv,Uid:9e1855e1-6224-40d7-bea4-bd52e7b227c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f\"" Sep 12 22:03:58.216488 kubelet[2655]: E0912 22:03:58.216454 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.216981 containerd[1533]: time="2025-09-12T22:03:58.216931713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv2bm,Uid:e4402442-790b-475f-9d84-0528ccf0a7b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\"" Sep 12 22:03:58.217920 kubelet[2655]: E0912 22:03:58.217893 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.220038 containerd[1533]: time="2025-09-12T22:03:58.219994426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 22:03:58.220269 containerd[1533]: time="2025-09-12T22:03:58.220226386Z" level=info msg="CreateContainer within sandbox \"1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 22:03:58.230076 containerd[1533]: time="2025-09-12T22:03:58.229991002Z" level=info msg="Container 91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:03:58.238241 containerd[1533]: time="2025-09-12T22:03:58.238195582Z" level=info msg="CreateContainer within sandbox \"1cf095a0982d3960875cbe22fbc8f086e6090a3f6b7d1c5b9382b9fda3f5443f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e\"" Sep 12 22:03:58.239891 containerd[1533]: time="2025-09-12T22:03:58.239853926Z" level=info msg="StartContainer for \"91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e\"" Sep 12 22:03:58.241287 containerd[1533]: time="2025-09-12T22:03:58.241261858Z" level=info msg="connecting to shim 91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e" address="unix:///run/containerd/s/08ec9879f1b07392fe1522c4e00d432222cf4110009c477e1d6746e2a85cb20d" protocol=ttrpc version=3 Sep 12 22:03:58.263031 systemd[1]: Started cri-containerd-91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e.scope - libcontainer container 91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e. 
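Annotation: the containerd messages above trace the standard CRI sequence for a pod: RunPodSandbox returns a sandbox id, CreateContainer places a container in that sandbox, and StartContainer runs it; the "connecting to shim ... unix:///run/containerd/s/..." lines are containerd's internal per-shim sockets, not the endpoint the kubelet dials. A hedged sketch of the same three calls against the CRI runtime service, with the socket path, log paths and image reference assumed rather than taken from this host:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Containerd's main CRI endpoint (path assumed).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata copied from the kube-proxy-rjlrv entries above.
        sbConfig := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-rjlrv",
                Uid:       "9e1855e1-6224-40d7-bea4-bd52e7b227c1",
                Namespace: "kube-system",
            },
            LogDirectory: "/var/log/pods/kube-system_kube-proxy-rjlrv", // assumed
        }

        // 1. RunPodSandbox: the "RunPodSandbox ... returns sandbox id" lines.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer inside that sandbox (image reference assumed).
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sbConfig,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
                LogPath:  "kube-proxy/0.log", // assumed
            },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: the "StartContainer ... returns successfully" lines.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            panic(err)
        }
    }

This is a sketch of the call sequence only; a production client fills in Linux security, resource and mount fields the kubelet normally derives from the pod spec.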
Sep 12 22:03:58.302205 containerd[1533]: time="2025-09-12T22:03:58.301978145Z" level=info msg="StartContainer for \"91fdf98dd6630142e18c3fab08690fd19d0b1fa0eebdb041b4df873d8656232e\" returns successfully" Sep 12 22:03:58.327026 kubelet[2655]: E0912 22:03:58.326988 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.327753 containerd[1533]: time="2025-09-12T22:03:58.327712018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wtfj7,Uid:85459d40-62f5-4cc7-840d-d918bd342b02,Namespace:kube-system,Attempt:0,}" Sep 12 22:03:58.337852 kubelet[2655]: E0912 22:03:58.337530 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.338252 kubelet[2655]: E0912 22:03:58.338231 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:58.359515 containerd[1533]: time="2025-09-12T22:03:58.359369938Z" level=info msg="connecting to shim 31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1" address="unix:///run/containerd/s/83443d9e96c5a88d6ff1f53cea81344975fb4fa038d050f8c99ea0b68a7002a3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:03:58.364804 kubelet[2655]: I0912 22:03:58.364748 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rjlrv" podStartSLOduration=1.364729036 podStartE2EDuration="1.364729036s" podCreationTimestamp="2025-09-12 22:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:03:58.352079389 +0000 UTC m=+7.126727893" watchObservedRunningTime="2025-09-12 22:03:58.364729036 +0000 UTC m=+7.139377540" Sep 12 22:03:58.384999 systemd[1]: Started cri-containerd-31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1.scope - libcontainer container 31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1. 
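Annotation: the recurring dns.go:153 "Nameserver limits exceeded" warning means the host resolv.conf lists more nameservers than the kubelet will pass through to pods; only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A small sketch of that check over a resolv.conf, using its own parsing and the limit of three implied by the warning (the path is the usual location and an assumption here):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // The kubelet keeps at most this many nameservers for pod DNS.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf") // host resolver config (path assumed)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("warning: %d nameservers configured, only %v will be applied\n",
                len(servers), servers[:maxNameservers])
        } else {
            fmt.Println("nameservers within limit:", servers)
        }
    }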
Sep 12 22:03:58.426494 containerd[1533]: time="2025-09-12T22:03:58.426452017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wtfj7,Uid:85459d40-62f5-4cc7-840d-d918bd342b02,Namespace:kube-system,Attempt:0,} returns sandbox id \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\"" Sep 12 22:03:58.427573 kubelet[2655]: E0912 22:03:58.427545 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:59.340692 kubelet[2655]: E0912 22:03:59.340525 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:03:59.554590 kubelet[2655]: E0912 22:03:59.554503 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:00.342437 kubelet[2655]: E0912 22:04:00.342379 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:05.324124 kubelet[2655]: E0912 22:04:05.324043 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:06.826081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622719742.mount: Deactivated successfully. Sep 12 22:04:08.067799 containerd[1533]: time="2025-09-12T22:04:08.067718368Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 22:04:08.070919 containerd[1533]: time="2025-09-12T22:04:08.070880362Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.850845552s" Sep 12 22:04:08.070919 containerd[1533]: time="2025-09-12T22:04:08.070919858Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 22:04:08.073454 containerd[1533]: time="2025-09-12T22:04:08.073407980Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:04:08.074252 containerd[1533]: time="2025-09-12T22:04:08.074221067Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:04:08.076886 containerd[1533]: time="2025-09-12T22:04:08.076854248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 22:04:08.080477 containerd[1533]: time="2025-09-12T22:04:08.080320244Z" level=info msg="CreateContainer within sandbox 
\"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 22:04:08.091627 containerd[1533]: time="2025-09-12T22:04:08.090933799Z" level=info msg="Container 9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:08.105877 containerd[1533]: time="2025-09-12T22:04:08.105838602Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\"" Sep 12 22:04:08.108290 containerd[1533]: time="2025-09-12T22:04:08.108255896Z" level=info msg="StartContainer for \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\"" Sep 12 22:04:08.109211 containerd[1533]: time="2025-09-12T22:04:08.109168784Z" level=info msg="connecting to shim 9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" protocol=ttrpc version=3 Sep 12 22:04:08.148006 systemd[1]: Started cri-containerd-9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff.scope - libcontainer container 9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff. Sep 12 22:04:08.174419 containerd[1533]: time="2025-09-12T22:04:08.174382531Z" level=info msg="StartContainer for \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" returns successfully" Sep 12 22:04:08.187993 systemd[1]: cri-containerd-9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff.scope: Deactivated successfully. Sep 12 22:04:08.208234 containerd[1533]: time="2025-09-12T22:04:08.208190388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" id:\"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" pid:3078 exited_at:{seconds:1757714648 nanos:207310713}" Sep 12 22:04:08.214500 containerd[1533]: time="2025-09-12T22:04:08.214447348Z" level=info msg="received exit event container_id:\"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" id:\"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" pid:3078 exited_at:{seconds:1757714648 nanos:207310713}" Sep 12 22:04:08.242847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff-rootfs.mount: Deactivated successfully. 
Sep 12 22:04:08.403326 kubelet[2655]: E0912 22:04:08.402981 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:08.405283 containerd[1533]: time="2025-09-12T22:04:08.405226310Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 22:04:08.445008 containerd[1533]: time="2025-09-12T22:04:08.444964476Z" level=info msg="Container 3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:08.450323 containerd[1533]: time="2025-09-12T22:04:08.450291261Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\"" Sep 12 22:04:08.451004 containerd[1533]: time="2025-09-12T22:04:08.450970375Z" level=info msg="StartContainer for \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\"" Sep 12 22:04:08.451764 containerd[1533]: time="2025-09-12T22:04:08.451728920Z" level=info msg="connecting to shim 3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" protocol=ttrpc version=3 Sep 12 22:04:08.474906 systemd[1]: Started cri-containerd-3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91.scope - libcontainer container 3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91. Sep 12 22:04:08.499158 update_engine[1519]: I20250912 22:04:08.499100 1519 update_attempter.cc:509] Updating boot flags... Sep 12 22:04:08.531600 containerd[1533]: time="2025-09-12T22:04:08.531440067Z" level=info msg="StartContainer for \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" returns successfully" Sep 12 22:04:08.576078 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:04:08.576295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:04:08.578195 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:04:08.580176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:04:08.591012 systemd[1]: cri-containerd-3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91.scope: Deactivated successfully. Sep 12 22:04:08.591270 systemd[1]: cri-containerd-3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91.scope: Consumed 42ms CPU time, 9.5M memory peak, 2.3M written to disk. Sep 12 22:04:08.606300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 22:04:08.609345 containerd[1533]: time="2025-09-12T22:04:08.609198706Z" level=info msg="received exit event container_id:\"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" id:\"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" pid:3124 exited_at:{seconds:1757714648 nanos:608588901}" Sep 12 22:04:08.609345 containerd[1533]: time="2025-09-12T22:04:08.609260571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" id:\"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" pid:3124 exited_at:{seconds:1757714648 nanos:608588901}" Sep 12 22:04:09.409398 kubelet[2655]: E0912 22:04:09.409359 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:09.415965 containerd[1533]: time="2025-09-12T22:04:09.415929105Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 22:04:09.630855 containerd[1533]: time="2025-09-12T22:04:09.630453825Z" level=info msg="Container 163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:09.789297 containerd[1533]: time="2025-09-12T22:04:09.789165009Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\"" Sep 12 22:04:09.790053 containerd[1533]: time="2025-09-12T22:04:09.790019336Z" level=info msg="StartContainer for \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\"" Sep 12 22:04:09.791454 containerd[1533]: time="2025-09-12T22:04:09.791414430Z" level=info msg="connecting to shim 163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" protocol=ttrpc version=3 Sep 12 22:04:09.838013 systemd[1]: Started cri-containerd-163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240.scope - libcontainer container 163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240. Sep 12 22:04:09.880914 systemd[1]: cri-containerd-163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240.scope: Deactivated successfully. 
Sep 12 22:04:09.883585 containerd[1533]: time="2025-09-12T22:04:09.883540073Z" level=info msg="StartContainer for \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" returns successfully" Sep 12 22:04:09.884252 containerd[1533]: time="2025-09-12T22:04:09.883604017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" id:\"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" pid:3196 exited_at:{seconds:1757714649 nanos:883237997}" Sep 12 22:04:09.884252 containerd[1533]: time="2025-09-12T22:04:09.883540913Z" level=info msg="received exit event container_id:\"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" id:\"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" pid:3196 exited_at:{seconds:1757714649 nanos:883237997}" Sep 12 22:04:10.087661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11736034.mount: Deactivated successfully. Sep 12 22:04:10.087803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240-rootfs.mount: Deactivated successfully. Sep 12 22:04:10.413296 kubelet[2655]: E0912 22:04:10.413176 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:10.415520 containerd[1533]: time="2025-09-12T22:04:10.415446173Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 22:04:10.434621 containerd[1533]: time="2025-09-12T22:04:10.434577105Z" level=info msg="Container 5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:10.442124 containerd[1533]: time="2025-09-12T22:04:10.442081960Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\"" Sep 12 22:04:10.442773 containerd[1533]: time="2025-09-12T22:04:10.442742601Z" level=info msg="StartContainer for \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\"" Sep 12 22:04:10.443530 containerd[1533]: time="2025-09-12T22:04:10.443499397Z" level=info msg="connecting to shim 5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" protocol=ttrpc version=3 Sep 12 22:04:10.467009 systemd[1]: Started cri-containerd-5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a.scope - libcontainer container 5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a. Sep 12 22:04:10.490110 systemd[1]: cri-containerd-5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a.scope: Deactivated successfully. 
Sep 12 22:04:10.493082 containerd[1533]: time="2025-09-12T22:04:10.493001997Z" level=info msg="received exit event container_id:\"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" id:\"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" pid:3238 exited_at:{seconds:1757714650 nanos:492789479}" Sep 12 22:04:10.493082 containerd[1533]: time="2025-09-12T22:04:10.493066980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" id:\"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" pid:3238 exited_at:{seconds:1757714650 nanos:492789479}" Sep 12 22:04:10.499550 containerd[1533]: time="2025-09-12T22:04:10.499479557Z" level=info msg="StartContainer for \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" returns successfully" Sep 12 22:04:10.510604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a-rootfs.mount: Deactivated successfully. Sep 12 22:04:11.419695 kubelet[2655]: E0912 22:04:11.419521 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:11.424518 containerd[1533]: time="2025-09-12T22:04:11.424384041Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 22:04:11.436876 containerd[1533]: time="2025-09-12T22:04:11.436829800Z" level=info msg="Container 41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:11.446928 containerd[1533]: time="2025-09-12T22:04:11.446795379Z" level=info msg="CreateContainer within sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\"" Sep 12 22:04:11.448384 containerd[1533]: time="2025-09-12T22:04:11.447421236Z" level=info msg="StartContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\"" Sep 12 22:04:11.450455 containerd[1533]: time="2025-09-12T22:04:11.450412794Z" level=info msg="connecting to shim 41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad" address="unix:///run/containerd/s/b877e3540afaf59ca0a1dd9b7ddbc1ad98d55f2b482fed4081d00b85d56d22d5" protocol=ttrpc version=3 Sep 12 22:04:11.475059 systemd[1]: Started cri-containerd-41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad.scope - libcontainer container 41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad. 
Sep 12 22:04:11.507242 containerd[1533]: time="2025-09-12T22:04:11.507197620Z" level=info msg="StartContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" returns successfully" Sep 12 22:04:11.622577 containerd[1533]: time="2025-09-12T22:04:11.622531084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" id:\"42ecf4840f4e9ed6ac7210c587c0cef2bcd47dbb8430e47751b1a1f4a3f84510\" pid:3308 exited_at:{seconds:1757714651 nanos:622221696}" Sep 12 22:04:11.660956 kubelet[2655]: I0912 22:04:11.660073 2655 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 22:04:11.698476 systemd[1]: Created slice kubepods-burstable-pod79ab8bdf_a54d_4e78_ac21_02639a1ef82d.slice - libcontainer container kubepods-burstable-pod79ab8bdf_a54d_4e78_ac21_02639a1ef82d.slice. Sep 12 22:04:11.714614 systemd[1]: Created slice kubepods-burstable-pod3f8e8b0f_b11f_46b7_9749_308ab1155d91.slice - libcontainer container kubepods-burstable-pod3f8e8b0f_b11f_46b7_9749_308ab1155d91.slice. Sep 12 22:04:11.733345 kubelet[2655]: I0912 22:04:11.733304 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvv4v\" (UniqueName: \"kubernetes.io/projected/3f8e8b0f-b11f-46b7-9749-308ab1155d91-kube-api-access-xvv4v\") pod \"coredns-668d6bf9bc-brwdl\" (UID: \"3f8e8b0f-b11f-46b7-9749-308ab1155d91\") " pod="kube-system/coredns-668d6bf9bc-brwdl" Sep 12 22:04:11.733345 kubelet[2655]: I0912 22:04:11.733355 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f8e8b0f-b11f-46b7-9749-308ab1155d91-config-volume\") pod \"coredns-668d6bf9bc-brwdl\" (UID: \"3f8e8b0f-b11f-46b7-9749-308ab1155d91\") " pod="kube-system/coredns-668d6bf9bc-brwdl" Sep 12 22:04:11.733482 kubelet[2655]: I0912 22:04:11.733374 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b8k\" (UniqueName: \"kubernetes.io/projected/79ab8bdf-a54d-4e78-ac21-02639a1ef82d-kube-api-access-t4b8k\") pod \"coredns-668d6bf9bc-gbkm9\" (UID: \"79ab8bdf-a54d-4e78-ac21-02639a1ef82d\") " pod="kube-system/coredns-668d6bf9bc-gbkm9" Sep 12 22:04:11.733482 kubelet[2655]: I0912 22:04:11.733392 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79ab8bdf-a54d-4e78-ac21-02639a1ef82d-config-volume\") pod \"coredns-668d6bf9bc-gbkm9\" (UID: \"79ab8bdf-a54d-4e78-ac21-02639a1ef82d\") " pod="kube-system/coredns-668d6bf9bc-gbkm9" Sep 12 22:04:12.003850 kubelet[2655]: E0912 22:04:12.003598 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:12.005221 containerd[1533]: time="2025-09-12T22:04:12.005184723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gbkm9,Uid:79ab8bdf-a54d-4e78-ac21-02639a1ef82d,Namespace:kube-system,Attempt:0,}" Sep 12 22:04:12.018639 kubelet[2655]: E0912 22:04:12.018593 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:12.021194 containerd[1533]: time="2025-09-12T22:04:12.020807450Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-brwdl,Uid:3f8e8b0f-b11f-46b7-9749-308ab1155d91,Namespace:kube-system,Attempt:0,}" Sep 12 22:04:12.361004 containerd[1533]: time="2025-09-12T22:04:12.360889879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:04:12.361801 containerd[1533]: time="2025-09-12T22:04:12.361755205Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 22:04:12.362847 containerd[1533]: time="2025-09-12T22:04:12.362783745Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:04:12.364037 containerd[1533]: time="2025-09-12T22:04:12.364003309Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.287113847s" Sep 12 22:04:12.364223 containerd[1533]: time="2025-09-12T22:04:12.364124869Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 22:04:12.367941 containerd[1533]: time="2025-09-12T22:04:12.367804286Z" level=info msg="CreateContainer within sandbox \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 22:04:12.380078 containerd[1533]: time="2025-09-12T22:04:12.380031650Z" level=info msg="Container 2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:12.385858 containerd[1533]: time="2025-09-12T22:04:12.385786633Z" level=info msg="CreateContainer within sandbox \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\"" Sep 12 22:04:12.386465 containerd[1533]: time="2025-09-12T22:04:12.386410199Z" level=info msg="StartContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\"" Sep 12 22:04:12.387265 containerd[1533]: time="2025-09-12T22:04:12.387237713Z" level=info msg="connecting to shim 2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d" address="unix:///run/containerd/s/83443d9e96c5a88d6ff1f53cea81344975fb4fa038d050f8c99ea0b68a7002a3" protocol=ttrpc version=3 Sep 12 22:04:12.418046 systemd[1]: Started cri-containerd-2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d.scope - libcontainer container 2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d. 
Sep 12 22:04:12.429644 kubelet[2655]: E0912 22:04:12.429611 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:12.451888 kubelet[2655]: I0912 22:04:12.451791 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rv2bm" podStartSLOduration=5.594053475 podStartE2EDuration="15.451772695s" podCreationTimestamp="2025-09-12 22:03:57 +0000 UTC" firstStartedPulling="2025-09-12 22:03:58.218976844 +0000 UTC m=+6.993625348" lastFinishedPulling="2025-09-12 22:04:08.076696064 +0000 UTC m=+16.851344568" observedRunningTime="2025-09-12 22:04:12.450905408 +0000 UTC m=+21.225553912" watchObservedRunningTime="2025-09-12 22:04:12.451772695 +0000 UTC m=+21.226421199" Sep 12 22:04:12.487968 containerd[1533]: time="2025-09-12T22:04:12.487922610Z" level=info msg="StartContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" returns successfully" Sep 12 22:04:13.432772 kubelet[2655]: E0912 22:04:13.432734 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:13.432772 kubelet[2655]: E0912 22:04:13.432764 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:13.459581 kubelet[2655]: I0912 22:04:13.459443 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wtfj7" podStartSLOduration=2.523420152 podStartE2EDuration="16.459424084s" podCreationTimestamp="2025-09-12 22:03:57 +0000 UTC" firstStartedPulling="2025-09-12 22:03:58.428922922 +0000 UTC m=+7.203571386" lastFinishedPulling="2025-09-12 22:04:12.364926814 +0000 UTC m=+21.139575318" observedRunningTime="2025-09-12 22:04:13.458781442 +0000 UTC m=+22.233429946" watchObservedRunningTime="2025-09-12 22:04:13.459424084 +0000 UTC m=+22.234072628" Sep 12 22:04:14.434459 kubelet[2655]: E0912 22:04:14.434429 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:14.434796 kubelet[2655]: E0912 22:04:14.434569 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:16.396655 systemd-networkd[1451]: cilium_host: Link UP Sep 12 22:04:16.396791 systemd-networkd[1451]: cilium_net: Link UP Sep 12 22:04:16.397358 systemd-networkd[1451]: cilium_host: Gained carrier Sep 12 22:04:16.397504 systemd-networkd[1451]: cilium_net: Gained carrier Sep 12 22:04:16.470544 systemd-networkd[1451]: cilium_vxlan: Link UP Sep 12 22:04:16.470686 systemd-networkd[1451]: cilium_vxlan: Gained carrier Sep 12 22:04:16.497947 systemd-networkd[1451]: cilium_host: Gained IPv6LL Sep 12 22:04:16.721856 kernel: NET: Registered PF_ALG protocol family Sep 12 22:04:17.018053 systemd-networkd[1451]: cilium_net: Gained IPv6LL Sep 12 22:04:17.284187 systemd-networkd[1451]: lxc_health: Link UP Sep 12 22:04:17.292043 systemd-networkd[1451]: lxc_health: Gained carrier Sep 12 22:04:17.575850 kernel: eth0: renamed from tmp5d1b0 Sep 12 22:04:17.575980 kernel: eth0: renamed from tmp2c1ef Sep 12 22:04:17.578188 
systemd-networkd[1451]: lxc66b9dbd7789b: Link UP Sep 12 22:04:17.578609 systemd-networkd[1451]: lxc44b9a01892d7: Link UP Sep 12 22:04:17.579052 systemd-networkd[1451]: lxc66b9dbd7789b: Gained carrier Sep 12 22:04:17.580975 systemd-networkd[1451]: lxc44b9a01892d7: Gained carrier Sep 12 22:04:17.721983 systemd-networkd[1451]: cilium_vxlan: Gained IPv6LL Sep 12 22:04:18.134592 kubelet[2655]: E0912 22:04:18.134069 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:18.362994 systemd-networkd[1451]: lxc_health: Gained IPv6LL Sep 12 22:04:18.442819 kubelet[2655]: E0912 22:04:18.442777 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:18.746022 systemd-networkd[1451]: lxc66b9dbd7789b: Gained IPv6LL Sep 12 22:04:18.809931 systemd-networkd[1451]: lxc44b9a01892d7: Gained IPv6LL Sep 12 22:04:19.445891 kubelet[2655]: E0912 22:04:19.445851 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:21.199063 containerd[1533]: time="2025-09-12T22:04:21.198936021Z" level=info msg="connecting to shim 2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c" address="unix:///run/containerd/s/892990f11359d6702ef04b9b8eb48d06d7316c1f302f87317d0b02e0034a7987" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:04:21.199776 containerd[1533]: time="2025-09-12T22:04:21.199743481Z" level=info msg="connecting to shim 5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998" address="unix:///run/containerd/s/64541813a96b64fda488000419c8d5c1b394f129e69f8bdc210f90ea35c4226a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:04:21.222997 systemd[1]: Started cri-containerd-5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998.scope - libcontainer container 5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998. Sep 12 22:04:21.237037 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:04:21.271565 containerd[1533]: time="2025-09-12T22:04:21.271521039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brwdl,Uid:3f8e8b0f-b11f-46b7-9749-308ab1155d91,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998\"" Sep 12 22:04:21.272406 kubelet[2655]: E0912 22:04:21.272381 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:21.280153 containerd[1533]: time="2025-09-12T22:04:21.280083868Z" level=info msg="CreateContainer within sandbox \"5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:04:21.291134 containerd[1533]: time="2025-09-12T22:04:21.291094402Z" level=info msg="Container b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:21.291256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042697026.mount: Deactivated successfully. 
Sep 12 22:04:21.297967 containerd[1533]: time="2025-09-12T22:04:21.297927045Z" level=info msg="CreateContainer within sandbox \"5d1b06cdadf3f94b392f6926f7a8298e98ec63da29e050cede1f072e09f9b998\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c\"" Sep 12 22:04:21.298633 containerd[1533]: time="2025-09-12T22:04:21.298604396Z" level=info msg="StartContainer for \"b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c\"" Sep 12 22:04:21.299670 containerd[1533]: time="2025-09-12T22:04:21.299610460Z" level=info msg="connecting to shim b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c" address="unix:///run/containerd/s/64541813a96b64fda488000419c8d5c1b394f129e69f8bdc210f90ea35c4226a" protocol=ttrpc version=3 Sep 12 22:04:21.306014 systemd[1]: Started cri-containerd-2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c.scope - libcontainer container 2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c. Sep 12 22:04:21.332996 systemd[1]: Started cri-containerd-b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c.scope - libcontainer container b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c. Sep 12 22:04:21.341607 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:55916.service - OpenSSH per-connection server daemon (10.0.0.1:55916). Sep 12 22:04:21.344056 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:04:21.372091 containerd[1533]: time="2025-09-12T22:04:21.372049046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gbkm9,Uid:79ab8bdf-a54d-4e78-ac21-02639a1ef82d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c\"" Sep 12 22:04:21.372836 kubelet[2655]: E0912 22:04:21.372792 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:21.377064 containerd[1533]: time="2025-09-12T22:04:21.376993868Z" level=info msg="CreateContainer within sandbox \"2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:04:21.382405 containerd[1533]: time="2025-09-12T22:04:21.382373507Z" level=info msg="StartContainer for \"b79c740619c35b6a26f5f1b000c39b5dd645247867815d77d8cc00e686ebb04c\" returns successfully" Sep 12 22:04:21.397109 containerd[1533]: time="2025-09-12T22:04:21.396973522Z" level=info msg="Container 54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:04:21.403456 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 55916 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:21.404856 containerd[1533]: time="2025-09-12T22:04:21.404624267Z" level=info msg="CreateContainer within sandbox \"2c1ef95349e7e2e8ac579880ade33aa748ae72a768d357a7001a643b4d8db47c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1\"" Sep 12 22:04:21.404662 sshd-session[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:21.408660 containerd[1533]: time="2025-09-12T22:04:21.408624198Z" level=info msg="StartContainer for 
\"54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1\"" Sep 12 22:04:21.410261 containerd[1533]: time="2025-09-12T22:04:21.409960536Z" level=info msg="connecting to shim 54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1" address="unix:///run/containerd/s/892990f11359d6702ef04b9b8eb48d06d7316c1f302f87317d0b02e0034a7987" protocol=ttrpc version=3 Sep 12 22:04:21.411010 systemd-logind[1517]: New session 8 of user core. Sep 12 22:04:21.418010 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 22:04:21.439978 systemd[1]: Started cri-containerd-54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1.scope - libcontainer container 54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1. Sep 12 22:04:21.453794 kubelet[2655]: E0912 22:04:21.453688 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:21.481498 containerd[1533]: time="2025-09-12T22:04:21.481318521Z" level=info msg="StartContainer for \"54e9a8b1c0b456692f29b17817882b4f5b07d7ed08c64647421cf512c4aa05b1\" returns successfully" Sep 12 22:04:21.582686 sshd[3970]: Connection closed by 10.0.0.1 port 55916 Sep 12 22:04:21.581666 sshd-session[3941]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:21.585130 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:55916.service: Deactivated successfully. Sep 12 22:04:21.587122 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 22:04:21.587910 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Sep 12 22:04:21.589010 systemd-logind[1517]: Removed session 8. Sep 12 22:04:22.458741 kubelet[2655]: E0912 22:04:22.458620 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:22.460152 kubelet[2655]: E0912 22:04:22.460117 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:22.471036 kubelet[2655]: I0912 22:04:22.470655 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gbkm9" podStartSLOduration=25.470643062 podStartE2EDuration="25.470643062s" podCreationTimestamp="2025-09-12 22:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:04:22.470127791 +0000 UTC m=+31.244776295" watchObservedRunningTime="2025-09-12 22:04:22.470643062 +0000 UTC m=+31.245291566" Sep 12 22:04:22.471036 kubelet[2655]: I0912 22:04:22.470738 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-brwdl" podStartSLOduration=25.470734761 podStartE2EDuration="25.470734761s" podCreationTimestamp="2025-09-12 22:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:04:21.490894016 +0000 UTC m=+30.265542520" watchObservedRunningTime="2025-09-12 22:04:22.470734761 +0000 UTC m=+31.245383265" Sep 12 22:04:23.460388 kubelet[2655]: E0912 22:04:23.460341 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
12 22:04:23.460723 kubelet[2655]: E0912 22:04:23.460428 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:24.462630 kubelet[2655]: E0912 22:04:24.462518 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:24.462630 kubelet[2655]: E0912 22:04:24.462558 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:04:26.595935 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:55930.service - OpenSSH per-connection server daemon (10.0.0.1:55930). Sep 12 22:04:26.662065 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 55930 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:26.663404 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:26.670472 systemd-logind[1517]: New session 9 of user core. Sep 12 22:04:26.681058 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 22:04:26.802657 sshd[4031]: Connection closed by 10.0.0.1 port 55930 Sep 12 22:04:26.803471 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:26.807175 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:55930.service: Deactivated successfully. Sep 12 22:04:26.811141 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 22:04:26.811892 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Sep 12 22:04:26.813420 systemd-logind[1517]: Removed session 9. Sep 12 22:04:31.818511 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:50882.service - OpenSSH per-connection server daemon (10.0.0.1:50882). Sep 12 22:04:31.872177 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 50882 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:31.875530 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:31.883031 systemd-logind[1517]: New session 10 of user core. Sep 12 22:04:31.894054 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 22:04:32.016325 sshd[4053]: Connection closed by 10.0.0.1 port 50882 Sep 12 22:04:32.016635 sshd-session[4050]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:32.019937 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:50882.service: Deactivated successfully. Sep 12 22:04:32.022640 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 22:04:32.025397 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Sep 12 22:04:32.026559 systemd-logind[1517]: Removed session 10. Sep 12 22:04:37.030120 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:50892.service - OpenSSH per-connection server daemon (10.0.0.1:50892). Sep 12 22:04:37.080063 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 50892 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:37.081081 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:37.095402 systemd-logind[1517]: New session 11 of user core. Sep 12 22:04:37.106000 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 22:04:37.230515 sshd[4070]: Connection closed by 10.0.0.1 port 50892 Sep 12 22:04:37.231053 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:37.247178 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:50892.service: Deactivated successfully. Sep 12 22:04:37.249880 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 22:04:37.250945 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Sep 12 22:04:37.252792 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:50896.service - OpenSSH per-connection server daemon (10.0.0.1:50896). Sep 12 22:04:37.257001 systemd-logind[1517]: Removed session 11. Sep 12 22:04:37.313497 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 50896 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:37.314756 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:37.319433 systemd-logind[1517]: New session 12 of user core. Sep 12 22:04:37.324969 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 22:04:37.478243 sshd[4087]: Connection closed by 10.0.0.1 port 50896 Sep 12 22:04:37.479528 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:37.490586 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:50896.service: Deactivated successfully. Sep 12 22:04:37.495177 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 22:04:37.499054 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Sep 12 22:04:37.502476 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:50912.service - OpenSSH per-connection server daemon (10.0.0.1:50912). Sep 12 22:04:37.505611 systemd-logind[1517]: Removed session 12. Sep 12 22:04:37.570519 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 50912 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:37.572079 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:37.575988 systemd-logind[1517]: New session 13 of user core. Sep 12 22:04:37.587983 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 22:04:37.699090 sshd[4102]: Connection closed by 10.0.0.1 port 50912 Sep 12 22:04:37.699426 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:37.702939 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:50912.service: Deactivated successfully. Sep 12 22:04:37.704759 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 22:04:37.705766 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Sep 12 22:04:37.707334 systemd-logind[1517]: Removed session 13. Sep 12 22:04:42.712350 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:48514.service - OpenSSH per-connection server daemon (10.0.0.1:48514). Sep 12 22:04:42.786949 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 48514 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:42.788484 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:42.792552 systemd-logind[1517]: New session 14 of user core. Sep 12 22:04:42.805010 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 22:04:42.939586 sshd[4120]: Connection closed by 10.0.0.1 port 48514 Sep 12 22:04:42.939915 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:42.943617 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:48514.service: Deactivated successfully. Sep 12 22:04:42.945294 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 22:04:42.946539 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. Sep 12 22:04:42.948397 systemd-logind[1517]: Removed session 14. Sep 12 22:04:47.959340 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:48526.service - OpenSSH per-connection server daemon (10.0.0.1:48526). Sep 12 22:04:48.019522 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 48526 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:48.021337 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:48.025911 systemd-logind[1517]: New session 15 of user core. Sep 12 22:04:48.049040 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 22:04:48.168911 sshd[4137]: Connection closed by 10.0.0.1 port 48526 Sep 12 22:04:48.170415 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:48.183470 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:48526.service: Deactivated successfully. Sep 12 22:04:48.185069 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 22:04:48.186484 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. Sep 12 22:04:48.190152 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). Sep 12 22:04:48.191107 systemd-logind[1517]: Removed session 15. Sep 12 22:04:48.248543 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:48.249596 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:48.253920 systemd-logind[1517]: New session 16 of user core. Sep 12 22:04:48.277006 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 22:04:48.468842 sshd[4153]: Connection closed by 10.0.0.1 port 48534 Sep 12 22:04:48.469981 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:48.476917 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:48534.service: Deactivated successfully. Sep 12 22:04:48.479563 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 22:04:48.480262 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. Sep 12 22:04:48.482295 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:48542.service - OpenSSH per-connection server daemon (10.0.0.1:48542). Sep 12 22:04:48.483286 systemd-logind[1517]: Removed session 16. Sep 12 22:04:48.548367 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 48542 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:48.550148 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:48.558067 systemd-logind[1517]: New session 17 of user core. Sep 12 22:04:48.568022 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 22:04:49.156580 sshd[4168]: Connection closed by 10.0.0.1 port 48542 Sep 12 22:04:49.156939 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:49.170018 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:48542.service: Deactivated successfully. Sep 12 22:04:49.174078 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 22:04:49.176209 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit. Sep 12 22:04:49.179712 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:48558.service - OpenSSH per-connection server daemon (10.0.0.1:48558). Sep 12 22:04:49.181342 systemd-logind[1517]: Removed session 17. Sep 12 22:04:49.232704 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 48558 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:49.233886 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:49.237759 systemd-logind[1517]: New session 18 of user core. Sep 12 22:04:49.257032 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 22:04:49.485376 sshd[4191]: Connection closed by 10.0.0.1 port 48558 Sep 12 22:04:49.485631 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:49.499954 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:48558.service: Deactivated successfully. Sep 12 22:04:49.502481 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 22:04:49.503231 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit. Sep 12 22:04:49.506015 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:48570.service - OpenSSH per-connection server daemon (10.0.0.1:48570). Sep 12 22:04:49.507505 systemd-logind[1517]: Removed session 18. Sep 12 22:04:49.557987 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 48570 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:49.561458 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:49.565826 systemd-logind[1517]: New session 19 of user core. Sep 12 22:04:49.576021 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 22:04:49.685088 sshd[4205]: Connection closed by 10.0.0.1 port 48570 Sep 12 22:04:49.685432 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:49.688830 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:48570.service: Deactivated successfully. Sep 12 22:04:49.691039 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 22:04:49.692015 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit. Sep 12 22:04:49.693373 systemd-logind[1517]: Removed session 19. Sep 12 22:04:54.700920 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:39304.service - OpenSSH per-connection server daemon (10.0.0.1:39304). Sep 12 22:04:54.760525 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 39304 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:54.762023 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:54.767471 systemd-logind[1517]: New session 20 of user core. Sep 12 22:04:54.773963 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 22:04:54.893430 sshd[4226]: Connection closed by 10.0.0.1 port 39304 Sep 12 22:04:54.893341 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 12 22:04:54.897299 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:39304.service: Deactivated successfully. Sep 12 22:04:54.928390 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 22:04:54.929629 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit. Sep 12 22:04:54.931267 systemd-logind[1517]: Removed session 20. Sep 12 22:04:59.908472 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:39308.service - OpenSSH per-connection server daemon (10.0.0.1:39308). Sep 12 22:04:59.964313 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:04:59.965375 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:04:59.968868 systemd-logind[1517]: New session 21 of user core. Sep 12 22:04:59.974973 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 22:05:00.080598 sshd[4244]: Connection closed by 10.0.0.1 port 39308 Sep 12 22:05:00.080460 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Sep 12 22:05:00.083851 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:39308.service: Deactivated successfully. Sep 12 22:05:00.085444 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 22:05:00.086077 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit. Sep 12 22:05:00.087258 systemd-logind[1517]: Removed session 21. Sep 12 22:05:05.099073 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). Sep 12 22:05:05.167585 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:05:05.169549 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:05:05.174456 systemd-logind[1517]: New session 22 of user core. Sep 12 22:05:05.188998 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 22:05:05.325241 sshd[4260]: Connection closed by 10.0.0.1 port 45580 Sep 12 22:05:05.325578 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Sep 12 22:05:05.337989 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:45580.service: Deactivated successfully. Sep 12 22:05:05.341222 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 22:05:05.342357 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit. Sep 12 22:05:05.345186 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:45592.service - OpenSSH per-connection server daemon (10.0.0.1:45592). Sep 12 22:05:05.347047 systemd-logind[1517]: Removed session 22. Sep 12 22:05:05.412345 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 45592 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:05:05.413593 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:05:05.418070 systemd-logind[1517]: New session 23 of user core. Sep 12 22:05:05.425967 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 22:05:06.943835 containerd[1533]: time="2025-09-12T22:05:06.942078615Z" level=info msg="StopContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" with timeout 30 (s)" Sep 12 22:05:06.944896 containerd[1533]: time="2025-09-12T22:05:06.944866680Z" level=info msg="Stop container \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" with signal terminated" Sep 12 22:05:06.956232 systemd[1]: cri-containerd-2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d.scope: Deactivated successfully. Sep 12 22:05:06.959382 containerd[1533]: time="2025-09-12T22:05:06.959331564Z" level=info msg="received exit event container_id:\"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" id:\"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" pid:3429 exited_at:{seconds:1757714706 nanos:959075024}" Sep 12 22:05:06.960508 containerd[1533]: time="2025-09-12T22:05:06.960366445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" id:\"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" pid:3429 exited_at:{seconds:1757714706 nanos:959075024}" Sep 12 22:05:06.967225 containerd[1533]: time="2025-09-12T22:05:06.967190598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" id:\"c010db7b41ad09e9f4774a93c2aee67f59a64155ec35fe327321c15ef01b507d\" pid:4299 exited_at:{seconds:1757714706 nanos:966920859}" Sep 12 22:05:06.969671 containerd[1533]: time="2025-09-12T22:05:06.969637690Z" level=info msg="StopContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" with timeout 2 (s)" Sep 12 22:05:06.970031 containerd[1533]: time="2025-09-12T22:05:06.970006661Z" level=info msg="Stop container \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" with signal terminated" Sep 12 22:05:06.974085 containerd[1533]: time="2025-09-12T22:05:06.973946757Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:05:06.979265 systemd-networkd[1451]: lxc_health: Link DOWN Sep 12 22:05:06.979274 systemd-networkd[1451]: lxc_health: Lost carrier Sep 12 22:05:06.990256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d-rootfs.mount: Deactivated successfully. Sep 12 22:05:07.000446 systemd[1]: cri-containerd-41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad.scope: Deactivated successfully. Sep 12 22:05:07.000757 systemd[1]: cri-containerd-41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad.scope: Consumed 6.218s CPU time, 123M memory peak, 140K read from disk, 12.9M written to disk. 
Sep 12 22:05:07.002484 containerd[1533]: time="2025-09-12T22:05:07.002446846Z" level=info msg="StopContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" returns successfully" Sep 12 22:05:07.002896 containerd[1533]: time="2025-09-12T22:05:07.002835258Z" level=info msg="received exit event container_id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" pid:3277 exited_at:{seconds:1757714707 nanos:2615434}" Sep 12 22:05:07.003026 containerd[1533]: time="2025-09-12T22:05:07.002841818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" id:\"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" pid:3277 exited_at:{seconds:1757714707 nanos:2615434}" Sep 12 22:05:07.005516 containerd[1533]: time="2025-09-12T22:05:07.005467707Z" level=info msg="StopPodSandbox for \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\"" Sep 12 22:05:07.018144 containerd[1533]: time="2025-09-12T22:05:07.018078670Z" level=info msg="Container to stop \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.021367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad-rootfs.mount: Deactivated successfully. Sep 12 22:05:07.026514 systemd[1]: cri-containerd-31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1.scope: Deactivated successfully. Sep 12 22:05:07.029985 containerd[1533]: time="2025-09-12T22:05:07.029950047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" id:\"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" pid:2900 exit_status:137 exited_at:{seconds:1757714707 nanos:29598192}" Sep 12 22:05:07.033577 containerd[1533]: time="2025-09-12T22:05:07.033473511Z" level=info msg="StopContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" returns successfully" Sep 12 22:05:07.034330 containerd[1533]: time="2025-09-12T22:05:07.034296851Z" level=info msg="StopPodSandbox for \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\"" Sep 12 22:05:07.034412 containerd[1533]: time="2025-09-12T22:05:07.034361006Z" level=info msg="Container to stop \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.034412 containerd[1533]: time="2025-09-12T22:05:07.034372485Z" level=info msg="Container to stop \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.034412 containerd[1533]: time="2025-09-12T22:05:07.034380725Z" level=info msg="Container to stop \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.034412 containerd[1533]: time="2025-09-12T22:05:07.034389084Z" level=info msg="Container to stop \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.034412 containerd[1533]: time="2025-09-12T22:05:07.034397643Z" level=info msg="Container to stop 
\"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:05:07.043146 systemd[1]: cri-containerd-f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d.scope: Deactivated successfully. Sep 12 22:05:07.065304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1-rootfs.mount: Deactivated successfully. Sep 12 22:05:07.068833 containerd[1533]: time="2025-09-12T22:05:07.068679711Z" level=info msg="shim disconnected" id=31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1 namespace=k8s.io Sep 12 22:05:07.069076 containerd[1533]: time="2025-09-12T22:05:07.068839699Z" level=warning msg="cleaning up after shim disconnected" id=31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1 namespace=k8s.io Sep 12 22:05:07.069076 containerd[1533]: time="2025-09-12T22:05:07.068872417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 22:05:07.073458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d-rootfs.mount: Deactivated successfully. Sep 12 22:05:07.077832 containerd[1533]: time="2025-09-12T22:05:07.077350921Z" level=info msg="shim disconnected" id=f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d namespace=k8s.io Sep 12 22:05:07.077832 containerd[1533]: time="2025-09-12T22:05:07.077388958Z" level=warning msg="cleaning up after shim disconnected" id=f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d namespace=k8s.io Sep 12 22:05:07.077832 containerd[1533]: time="2025-09-12T22:05:07.077420756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 22:05:07.087575 containerd[1533]: time="2025-09-12T22:05:07.085758549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" id:\"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" pid:2811 exit_status:137 exited_at:{seconds:1757714707 nanos:51098589}" Sep 12 22:05:07.087575 containerd[1533]: time="2025-09-12T22:05:07.085925137Z" level=info msg="TearDown network for sandbox \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" successfully" Sep 12 22:05:07.087575 containerd[1533]: time="2025-09-12T22:05:07.085951135Z" level=info msg="StopPodSandbox for \"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" returns successfully" Sep 12 22:05:07.088354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1-shm.mount: Deactivated successfully. 
Sep 12 22:05:07.093156 containerd[1533]: time="2025-09-12T22:05:07.093114015Z" level=info msg="TearDown network for sandbox \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" successfully" Sep 12 22:05:07.093156 containerd[1533]: time="2025-09-12T22:05:07.093149772Z" level=info msg="StopPodSandbox for \"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" returns successfully" Sep 12 22:05:07.106460 containerd[1533]: time="2025-09-12T22:05:07.106301576Z" level=info msg="received exit event sandbox_id:\"31a032fde56e95c21da7ebedfe6853df381dd456e684510453f7324ac22719c1\" exit_status:137 exited_at:{seconds:1757714707 nanos:29598192}" Sep 12 22:05:07.106837 containerd[1533]: time="2025-09-12T22:05:07.106537199Z" level=info msg="received exit event sandbox_id:\"f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d\" exit_status:137 exited_at:{seconds:1757714707 nanos:51098589}" Sep 12 22:05:07.203229 kubelet[2655]: I0912 22:05:07.203073 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-lib-modules\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203229 kubelet[2655]: I0912 22:05:07.203122 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-xtables-lock\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203646 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-etc-cni-netd\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203699 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcst7\" (UniqueName: \"kubernetes.io/projected/85459d40-62f5-4cc7-840d-d918bd342b02-kube-api-access-kcst7\") pod \"85459d40-62f5-4cc7-840d-d918bd342b02\" (UID: \"85459d40-62f5-4cc7-840d-d918bd342b02\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203718 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cni-path\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203733 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-bpf-maps\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203749 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-hostproc\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.203993 kubelet[2655]: I0912 22:05:07.203764 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
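The TaskExit and "received exit event" records above carry two machine-oriented fields: exit_status:137 and exited_at expressed as epoch seconds plus nanoseconds. As an illustrative aside (not part of the journal), a minimal Python sketch decoding both, using values copied from the sandbox exit events above; 137 is 128 + 9, i.e. the task was terminated with SIGKILL, and the epoch seconds line up with the 22:05:07 journal timestamps:

    import datetime
    import signal

    # Values copied from the containerd TaskExit events logged above.
    exit_status = 137
    exited_at_seconds = 1757714707

    # Exit codes above 128 conventionally mean "terminated by signal (code - 128)".
    print(signal.Signals(exit_status - 128).name)        # SIGKILL
    print(datetime.datetime.fromtimestamp(
        exited_at_seconds, tz=datetime.timezone.utc).isoformat())
    # 2025-09-12T22:05:07+00:00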
\"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-hubble-tls\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203781 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85459d40-62f5-4cc7-840d-d918bd342b02-cilium-config-path\") pod \"85459d40-62f5-4cc7-840d-d918bd342b02\" (UID: \"85459d40-62f5-4cc7-840d-d918bd342b02\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203800 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4402442-790b-475f-9d84-0528ccf0a7b7-clustermesh-secrets\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203846 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-cgroup\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203870 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-config-path\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203892 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-net\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204238 kubelet[2655]: I0912 22:05:07.203909 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbqg\" (UniqueName: \"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-kube-api-access-jdbqg\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204370 kubelet[2655]: I0912 22:05:07.203924 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-run\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.204370 kubelet[2655]: I0912 22:05:07.203940 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-kernel\") pod \"e4402442-790b-475f-9d84-0528ccf0a7b7\" (UID: \"e4402442-790b-475f-9d84-0528ccf0a7b7\") " Sep 12 22:05:07.205610 kubelet[2655]: I0912 22:05:07.205323 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.205610 kubelet[2655]: I0912 22:05:07.205323 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.205610 kubelet[2655]: I0912 22:05:07.205388 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.205610 kubelet[2655]: I0912 22:05:07.205571 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.205610 kubelet[2655]: I0912 22:05:07.205600 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.205792 kubelet[2655]: I0912 22:05:07.205617 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.206169 kubelet[2655]: I0912 22:05:07.206110 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.221240 kubelet[2655]: I0912 22:05:07.221167 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85459d40-62f5-4cc7-840d-d918bd342b02-kube-api-access-kcst7" (OuterVolumeSpecName: "kube-api-access-kcst7") pod "85459d40-62f5-4cc7-840d-d918bd342b02" (UID: "85459d40-62f5-4cc7-840d-d918bd342b02"). InnerVolumeSpecName "kube-api-access-kcst7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 22:05:07.221383 kubelet[2655]: I0912 22:05:07.221272 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.221383 kubelet[2655]: I0912 22:05:07.221302 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.221522 kubelet[2655]: I0912 22:05:07.221491 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 22:05:07.221559 kubelet[2655]: I0912 22:05:07.221534 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 22:05:07.223783 kubelet[2655]: I0912 22:05:07.223721 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-kube-api-access-jdbqg" (OuterVolumeSpecName: "kube-api-access-jdbqg") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "kube-api-access-jdbqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 22:05:07.225241 kubelet[2655]: I0912 22:05:07.225139 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85459d40-62f5-4cc7-840d-d918bd342b02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85459d40-62f5-4cc7-840d-d918bd342b02" (UID: "85459d40-62f5-4cc7-840d-d918bd342b02"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 22:05:07.225905 kubelet[2655]: I0912 22:05:07.225869 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4402442-790b-475f-9d84-0528ccf0a7b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 22:05:07.227736 kubelet[2655]: I0912 22:05:07.227687 2655 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4402442-790b-475f-9d84-0528ccf0a7b7" (UID: "e4402442-790b-475f-9d84-0528ccf0a7b7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304120 2655 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304161 2655 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304170 2655 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304179 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kcst7\" (UniqueName: \"kubernetes.io/projected/85459d40-62f5-4cc7-840d-d918bd342b02-kube-api-access-kcst7\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304192 2655 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304190 kubelet[2655]: I0912 22:05:07.304200 2655 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304208 2655 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304238 2655 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304246 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85459d40-62f5-4cc7-840d-d918bd342b02-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304254 2655 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4402442-790b-475f-9d84-0528ccf0a7b7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304261 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304269 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304277 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-net\") on node \"localhost\" 
DevicePath \"\"" Sep 12 22:05:07.304487 kubelet[2655]: I0912 22:05:07.304285 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jdbqg\" (UniqueName: \"kubernetes.io/projected/e4402442-790b-475f-9d84-0528ccf0a7b7-kube-api-access-jdbqg\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304689 kubelet[2655]: I0912 22:05:07.304292 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.304689 kubelet[2655]: I0912 22:05:07.304301 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4402442-790b-475f-9d84-0528ccf0a7b7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 22:05:07.311047 kubelet[2655]: E0912 22:05:07.311014 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:07.319057 systemd[1]: Removed slice kubepods-burstable-pode4402442_790b_475f_9d84_0528ccf0a7b7.slice - libcontainer container kubepods-burstable-pode4402442_790b_475f_9d84_0528ccf0a7b7.slice. Sep 12 22:05:07.319170 systemd[1]: kubepods-burstable-pode4402442_790b_475f_9d84_0528ccf0a7b7.slice: Consumed 6.326s CPU time, 123.3M memory peak, 144K read from disk, 15.2M written to disk. Sep 12 22:05:07.320361 systemd[1]: Removed slice kubepods-besteffort-pod85459d40_62f5_4cc7_840d_d918bd342b02.slice - libcontainer container kubepods-besteffort-pod85459d40_62f5_4cc7_840d_d918bd342b02.slice. Sep 12 22:05:07.562913 kubelet[2655]: I0912 22:05:07.562529 2655 scope.go:117] "RemoveContainer" containerID="41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad" Sep 12 22:05:07.568922 containerd[1533]: time="2025-09-12T22:05:07.568133640Z" level=info msg="RemoveContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\"" Sep 12 22:05:07.578188 containerd[1533]: time="2025-09-12T22:05:07.578150792Z" level=info msg="RemoveContainer for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" returns successfully" Sep 12 22:05:07.578637 kubelet[2655]: I0912 22:05:07.578597 2655 scope.go:117] "RemoveContainer" containerID="5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a" Sep 12 22:05:07.582960 containerd[1533]: time="2025-09-12T22:05:07.582925485Z" level=info msg="RemoveContainer for \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\"" Sep 12 22:05:07.587527 containerd[1533]: time="2025-09-12T22:05:07.587498192Z" level=info msg="RemoveContainer for \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" returns successfully" Sep 12 22:05:07.587840 kubelet[2655]: I0912 22:05:07.587804 2655 scope.go:117] "RemoveContainer" containerID="163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240" Sep 12 22:05:07.590741 containerd[1533]: time="2025-09-12T22:05:07.590709719Z" level=info msg="RemoveContainer for \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\"" Sep 12 22:05:07.594850 containerd[1533]: time="2025-09-12T22:05:07.594794942Z" level=info msg="RemoveContainer for \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" returns successfully" Sep 12 22:05:07.595142 kubelet[2655]: I0912 22:05:07.595106 2655 scope.go:117] "RemoveContainer" 
containerID="3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91" Sep 12 22:05:07.596453 containerd[1533]: time="2025-09-12T22:05:07.596428903Z" level=info msg="RemoveContainer for \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\"" Sep 12 22:05:07.599335 containerd[1533]: time="2025-09-12T22:05:07.599295454Z" level=info msg="RemoveContainer for \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" returns successfully" Sep 12 22:05:07.599534 kubelet[2655]: I0912 22:05:07.599505 2655 scope.go:117] "RemoveContainer" containerID="9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff" Sep 12 22:05:07.600841 containerd[1533]: time="2025-09-12T22:05:07.600783706Z" level=info msg="RemoveContainer for \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\"" Sep 12 22:05:07.603442 containerd[1533]: time="2025-09-12T22:05:07.603418715Z" level=info msg="RemoveContainer for \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" returns successfully" Sep 12 22:05:07.603600 kubelet[2655]: I0912 22:05:07.603576 2655 scope.go:117] "RemoveContainer" containerID="41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad" Sep 12 22:05:07.603791 containerd[1533]: time="2025-09-12T22:05:07.603762250Z" level=error msg="ContainerStatus for \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\": not found" Sep 12 22:05:07.603919 kubelet[2655]: E0912 22:05:07.603894 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\": not found" containerID="41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad" Sep 12 22:05:07.608215 kubelet[2655]: I0912 22:05:07.608086 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad"} err="failed to get container status \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\": rpc error: code = NotFound desc = an error occurred when try to find container \"41791bf726ea9de302e06c6290873b5928b5f2631fbb843066e06d8f048a3aad\": not found" Sep 12 22:05:07.608254 kubelet[2655]: I0912 22:05:07.608221 2655 scope.go:117] "RemoveContainer" containerID="5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a" Sep 12 22:05:07.608484 containerd[1533]: time="2025-09-12T22:05:07.608453349Z" level=error msg="ContainerStatus for \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\": not found" Sep 12 22:05:07.608604 kubelet[2655]: E0912 22:05:07.608584 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\": not found" containerID="5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a" Sep 12 22:05:07.608642 kubelet[2655]: I0912 22:05:07.608607 2655 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a"} err="failed to get container status \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5365f3de80ea35d8e34b002e8515258850540d59a4227c1fed7d275d9d21937a\": not found" Sep 12 22:05:07.608642 kubelet[2655]: I0912 22:05:07.608622 2655 scope.go:117] "RemoveContainer" containerID="163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240" Sep 12 22:05:07.608793 containerd[1533]: time="2025-09-12T22:05:07.608763606Z" level=error msg="ContainerStatus for \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\": not found" Sep 12 22:05:07.608921 kubelet[2655]: E0912 22:05:07.608897 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\": not found" containerID="163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240" Sep 12 22:05:07.608952 kubelet[2655]: I0912 22:05:07.608924 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240"} err="failed to get container status \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\": rpc error: code = NotFound desc = an error occurred when try to find container \"163a5406129ed6dbffe0385b2b88828ea9838bf7dbf2bdcfa27da9baee67a240\": not found" Sep 12 22:05:07.608952 kubelet[2655]: I0912 22:05:07.608937 2655 scope.go:117] "RemoveContainer" containerID="3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91" Sep 12 22:05:07.609103 containerd[1533]: time="2025-09-12T22:05:07.609074743Z" level=error msg="ContainerStatus for \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\": not found" Sep 12 22:05:07.609264 kubelet[2655]: E0912 22:05:07.609238 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\": not found" containerID="3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91" Sep 12 22:05:07.609306 kubelet[2655]: I0912 22:05:07.609267 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91"} err="failed to get container status \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ebbb6f184451977ba75c1c2a6a93764242c836fc7beff3a4ba4391a18f65d91\": not found" Sep 12 22:05:07.609306 kubelet[2655]: I0912 22:05:07.609284 2655 scope.go:117] "RemoveContainer" containerID="9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff" Sep 12 22:05:07.609478 containerd[1533]: time="2025-09-12T22:05:07.609449156Z" level=error msg="ContainerStatus for \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\": not found" Sep 12 22:05:07.609695 kubelet[2655]: E0912 22:05:07.609570 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\": not found" containerID="9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff" Sep 12 22:05:07.609695 kubelet[2655]: I0912 22:05:07.609597 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff"} err="failed to get container status \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e4142c6594500b1d7ddce6d79fae83d3db2f029c612295deb812f046de7f4ff\": not found" Sep 12 22:05:07.609695 kubelet[2655]: I0912 22:05:07.609613 2655 scope.go:117] "RemoveContainer" containerID="2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d" Sep 12 22:05:07.610878 containerd[1533]: time="2025-09-12T22:05:07.610855294Z" level=info msg="RemoveContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\"" Sep 12 22:05:07.613906 containerd[1533]: time="2025-09-12T22:05:07.613869155Z" level=info msg="RemoveContainer for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" returns successfully" Sep 12 22:05:07.614102 kubelet[2655]: I0912 22:05:07.614080 2655 scope.go:117] "RemoveContainer" containerID="2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d" Sep 12 22:05:07.614333 containerd[1533]: time="2025-09-12T22:05:07.614304683Z" level=error msg="ContainerStatus for \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\": not found" Sep 12 22:05:07.614437 kubelet[2655]: E0912 22:05:07.614418 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\": not found" containerID="2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d" Sep 12 22:05:07.614473 kubelet[2655]: I0912 22:05:07.614441 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d"} err="failed to get container status \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2cf32f6f616297d204a3ebc28ce4b3336d4ca9a92a893ca263e4fa36c08f022d\": not found" Sep 12 22:05:07.986628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f60ab3c56bb932c241ec3b8ce5c78d7c6d9faf1655bf4d0a1e4043ede9c7b44d-shm.mount: Deactivated successfully. Sep 12 22:05:07.986740 systemd[1]: var-lib-kubelet-pods-85459d40\x2d62f5\x2d4cc7\x2d840d\x2dd918bd342b02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkcst7.mount: Deactivated successfully. 
Sep 12 22:05:07.986799 systemd[1]: var-lib-kubelet-pods-e4402442\x2d790b\x2d475f\x2d9d84\x2d0528ccf0a7b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdbqg.mount: Deactivated successfully. Sep 12 22:05:07.986874 systemd[1]: var-lib-kubelet-pods-e4402442\x2d790b\x2d475f\x2d9d84\x2d0528ccf0a7b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 22:05:07.986924 systemd[1]: var-lib-kubelet-pods-e4402442\x2d790b\x2d475f\x2d9d84\x2d0528ccf0a7b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 22:05:08.310838 kubelet[2655]: E0912 22:05:08.310681 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:08.883062 sshd[4277]: Connection closed by 10.0.0.1 port 45592 Sep 12 22:05:08.884141 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Sep 12 22:05:08.891577 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:45592.service: Deactivated successfully. Sep 12 22:05:08.893676 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 22:05:08.894514 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit. Sep 12 22:05:08.897434 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:45602.service - OpenSSH per-connection server daemon (10.0.0.1:45602). Sep 12 22:05:08.899166 systemd-logind[1517]: Removed session 23. Sep 12 22:05:08.947474 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 45602 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:05:08.948707 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:05:08.952885 systemd-logind[1517]: New session 24 of user core. Sep 12 22:05:08.963967 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 22:05:09.313216 kubelet[2655]: I0912 22:05:09.312418 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85459d40-62f5-4cc7-840d-d918bd342b02" path="/var/lib/kubelet/pods/85459d40-62f5-4cc7-840d-d918bd342b02/volumes" Sep 12 22:05:09.313216 kubelet[2655]: I0912 22:05:09.312771 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4402442-790b-475f-9d84-0528ccf0a7b7" path="/var/lib/kubelet/pods/e4402442-790b-475f-9d84-0528ccf0a7b7/volumes" Sep 12 22:05:10.164113 sshd[4434]: Connection closed by 10.0.0.1 port 45602 Sep 12 22:05:10.164191 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Sep 12 22:05:10.176544 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:45602.service: Deactivated successfully. Sep 12 22:05:10.179653 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 22:05:10.180891 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit. Sep 12 22:05:10.190003 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:42812.service - OpenSSH per-connection server daemon (10.0.0.1:42812). Sep 12 22:05:10.191883 systemd-logind[1517]: Removed session 24. 
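The mount unit names in the systemd lines above are the kubelet volume paths in systemd's escaped form: '-' separates path components, and literal characters such as '-' and '~' appear as \x2d and \x7e. A small sketch (an editor's illustration, not output from the host) that reverses the escaping to recover the path:

    import re

    def mount_unit_to_path(unit: str) -> str:
        # systemd mount units: strip the .mount suffix, '-' marks a '/',
        # and \xNN escapes stand for the literal byte with that hex value.
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), path)

    unit = (r"var-lib-kubelet-pods-e4402442\x2d790b\x2d475f\x2d9d84\x2d0528ccf0a7b7"
            r"-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount")
    print(mount_unit_to_path(unit))
    # /var/lib/kubelet/pods/e4402442-790b-475f-9d84-0528ccf0a7b7/volumes/kubernetes.io~projected/hubble-tls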
Sep 12 22:05:10.192490 kubelet[2655]: I0912 22:05:10.192440 2655 memory_manager.go:355] "RemoveStaleState removing state" podUID="e4402442-790b-475f-9d84-0528ccf0a7b7" containerName="cilium-agent" Sep 12 22:05:10.192490 kubelet[2655]: I0912 22:05:10.192483 2655 memory_manager.go:355] "RemoveStaleState removing state" podUID="85459d40-62f5-4cc7-840d-d918bd342b02" containerName="cilium-operator" Sep 12 22:05:10.214023 systemd[1]: Created slice kubepods-burstable-pod6f2b2557_634f_403f_9828_9675123f1eae.slice - libcontainer container kubepods-burstable-pod6f2b2557_634f_403f_9828_9675123f1eae.slice. Sep 12 22:05:10.274980 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 42812 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:05:10.276278 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:05:10.280875 systemd-logind[1517]: New session 25 of user core. Sep 12 22:05:10.294010 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 22:05:10.324119 kubelet[2655]: I0912 22:05:10.324076 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-cilium-cgroup\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324119 kubelet[2655]: I0912 22:05:10.324125 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-etc-cni-netd\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324178 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbjw\" (UniqueName: \"kubernetes.io/projected/6f2b2557-634f-403f-9828-9675123f1eae-kube-api-access-gtbjw\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324243 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-cilium-run\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324271 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-hostproc\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324286 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-cni-path\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324302 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-lib-modules\") pod \"cilium-prrpv\" (UID: 
\"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324623 kubelet[2655]: I0912 22:05:10.324318 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f2b2557-634f-403f-9828-9675123f1eae-hubble-tls\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324773 kubelet[2655]: I0912 22:05:10.324344 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-bpf-maps\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324773 kubelet[2655]: I0912 22:05:10.324362 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f2b2557-634f-403f-9828-9675123f1eae-clustermesh-secrets\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324773 kubelet[2655]: I0912 22:05:10.324377 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-host-proc-sys-kernel\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324773 kubelet[2655]: I0912 22:05:10.324394 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f2b2557-634f-403f-9828-9675123f1eae-cilium-config-path\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324773 kubelet[2655]: I0912 22:05:10.324409 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-xtables-lock\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324908 kubelet[2655]: I0912 22:05:10.324446 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6f2b2557-634f-403f-9828-9675123f1eae-cilium-ipsec-secrets\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.324908 kubelet[2655]: I0912 22:05:10.324474 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f2b2557-634f-403f-9828-9675123f1eae-host-proc-sys-net\") pod \"cilium-prrpv\" (UID: \"6f2b2557-634f-403f-9828-9675123f1eae\") " pod="kube-system/cilium-prrpv" Sep 12 22:05:10.344135 sshd[4451]: Connection closed by 10.0.0.1 port 42812 Sep 12 22:05:10.344556 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Sep 12 22:05:10.359763 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:42812.service: Deactivated successfully. Sep 12 22:05:10.361597 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 22:05:10.363376 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit. 
Sep 12 22:05:10.365697 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:42822.service - OpenSSH per-connection server daemon (10.0.0.1:42822). Sep 12 22:05:10.366431 systemd-logind[1517]: Removed session 25. Sep 12 22:05:10.434636 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 42822 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:05:10.435682 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:05:10.444733 systemd-logind[1517]: New session 26 of user core. Sep 12 22:05:10.455040 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 22:05:10.517280 kubelet[2655]: E0912 22:05:10.517236 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:10.517768 containerd[1533]: time="2025-09-12T22:05:10.517730986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prrpv,Uid:6f2b2557-634f-403f-9828-9675123f1eae,Namespace:kube-system,Attempt:0,}" Sep 12 22:05:10.539447 containerd[1533]: time="2025-09-12T22:05:10.539353443Z" level=info msg="connecting to shim 016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:05:10.576067 systemd[1]: Started cri-containerd-016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2.scope - libcontainer container 016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2. Sep 12 22:05:10.601024 containerd[1533]: time="2025-09-12T22:05:10.600947253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prrpv,Uid:6f2b2557-634f-403f-9828-9675123f1eae,Namespace:kube-system,Attempt:0,} returns sandbox id \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\"" Sep 12 22:05:10.602205 kubelet[2655]: E0912 22:05:10.601963 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:10.604402 containerd[1533]: time="2025-09-12T22:05:10.604357328Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 22:05:10.611138 containerd[1533]: time="2025-09-12T22:05:10.611085203Z" level=info msg="Container 3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:05:10.616721 containerd[1533]: time="2025-09-12T22:05:10.616662147Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\"" Sep 12 22:05:10.617172 containerd[1533]: time="2025-09-12T22:05:10.617151517Z" level=info msg="StartContainer for \"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\"" Sep 12 22:05:10.618326 containerd[1533]: time="2025-09-12T22:05:10.618270290Z" level=info msg="connecting to shim 3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" protocol=ttrpc version=3 Sep 12 22:05:10.638018 systemd[1]: Started 
cri-containerd-3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc.scope - libcontainer container 3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc. Sep 12 22:05:10.662910 containerd[1533]: time="2025-09-12T22:05:10.662865484Z" level=info msg="StartContainer for \"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\" returns successfully" Sep 12 22:05:10.674353 systemd[1]: cri-containerd-3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc.scope: Deactivated successfully. Sep 12 22:05:10.677420 containerd[1533]: time="2025-09-12T22:05:10.677370010Z" level=info msg="received exit event container_id:\"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\" id:\"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\" pid:4531 exited_at:{seconds:1757714710 nanos:677120985}" Sep 12 22:05:10.677597 containerd[1533]: time="2025-09-12T22:05:10.677527401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\" id:\"3d0ffd63562c6926826f32500572f122be8e8e22870d931850f595cd1fbffbbc\" pid:4531 exited_at:{seconds:1757714710 nanos:677120985}" Sep 12 22:05:11.362158 kubelet[2655]: E0912 22:05:11.362112 2655 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 22:05:11.590255 kubelet[2655]: E0912 22:05:11.590185 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:11.594501 containerd[1533]: time="2025-09-12T22:05:11.594464378Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 22:05:11.609019 containerd[1533]: time="2025-09-12T22:05:11.608973561Z" level=info msg="Container d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:05:11.616293 containerd[1533]: time="2025-09-12T22:05:11.616164556Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\"" Sep 12 22:05:11.618373 containerd[1533]: time="2025-09-12T22:05:11.618342433Z" level=info msg="StartContainer for \"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\"" Sep 12 22:05:11.619216 containerd[1533]: time="2025-09-12T22:05:11.619186306Z" level=info msg="connecting to shim d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" protocol=ttrpc version=3 Sep 12 22:05:11.640995 systemd[1]: Started cri-containerd-d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158.scope - libcontainer container d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158. 
Sep 12 22:05:11.686081 containerd[1533]: time="2025-09-12T22:05:11.686037740Z" level=info msg="StartContainer for \"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\" returns successfully" Sep 12 22:05:11.692624 systemd[1]: cri-containerd-d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158.scope: Deactivated successfully. Sep 12 22:05:11.693328 containerd[1533]: time="2025-09-12T22:05:11.693296971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\" id:\"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\" pid:4575 exited_at:{seconds:1757714711 nanos:692828517}" Sep 12 22:05:11.693404 containerd[1533]: time="2025-09-12T22:05:11.693299731Z" level=info msg="received exit event container_id:\"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\" id:\"d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158\" pid:4575 exited_at:{seconds:1757714711 nanos:692828517}" Sep 12 22:05:11.711026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5f2a67c2bcd5792d034d4281b4f207c4ee95cb91caa83f48e05a08f24d24158-rootfs.mount: Deactivated successfully. Sep 12 22:05:12.594370 kubelet[2655]: E0912 22:05:12.594333 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:12.598540 containerd[1533]: time="2025-09-12T22:05:12.598497866Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 22:05:12.607837 containerd[1533]: time="2025-09-12T22:05:12.607635066Z" level=info msg="Container 7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:05:12.618971 containerd[1533]: time="2025-09-12T22:05:12.618917393Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\"" Sep 12 22:05:12.619433 containerd[1533]: time="2025-09-12T22:05:12.619395768Z" level=info msg="StartContainer for \"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\"" Sep 12 22:05:12.620761 containerd[1533]: time="2025-09-12T22:05:12.620733458Z" level=info msg="connecting to shim 7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" protocol=ttrpc version=3 Sep 12 22:05:12.648995 systemd[1]: Started cri-containerd-7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826.scope - libcontainer container 7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826. Sep 12 22:05:12.696831 containerd[1533]: time="2025-09-12T22:05:12.696753622Z" level=info msg="StartContainer for \"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\" returns successfully" Sep 12 22:05:12.697872 systemd[1]: cri-containerd-7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826.scope: Deactivated successfully. 
Sep 12 22:05:12.701210 containerd[1533]: time="2025-09-12T22:05:12.701172510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\" id:\"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\" pid:4618 exited_at:{seconds:1757714712 nanos:700833968}" Sep 12 22:05:12.701333 containerd[1533]: time="2025-09-12T22:05:12.701282064Z" level=info msg="received exit event container_id:\"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\" id:\"7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826\" pid:4618 exited_at:{seconds:1757714712 nanos:700833968}" Sep 12 22:05:12.729065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dbeac9653e08abdbf5318c2be15f62b974a71c4f67687d9a26ae6be411f4826-rootfs.mount: Deactivated successfully. Sep 12 22:05:12.825013 kubelet[2655]: I0912 22:05:12.822804 2655 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T22:05:12Z","lastTransitionTime":"2025-09-12T22:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 22:05:13.599499 kubelet[2655]: E0912 22:05:13.599457 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:05:13.602518 containerd[1533]: time="2025-09-12T22:05:13.601633257Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 22:05:13.613784 containerd[1533]: time="2025-09-12T22:05:13.613705787Z" level=info msg="Container 55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:05:13.622370 containerd[1533]: time="2025-09-12T22:05:13.622299407Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\"" Sep 12 22:05:13.622982 containerd[1533]: time="2025-09-12T22:05:13.622894898Z" level=info msg="StartContainer for \"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\"" Sep 12 22:05:13.624271 containerd[1533]: time="2025-09-12T22:05:13.623793534Z" level=info msg="connecting to shim 55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" protocol=ttrpc version=3 Sep 12 22:05:13.650009 systemd[1]: Started cri-containerd-55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38.scope - libcontainer container 55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38. Sep 12 22:05:13.674106 systemd[1]: cri-containerd-55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38.scope: Deactivated successfully. 
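The setters.go entry above records the node flipping to NotReady while the CNI configuration is still missing; the condition is embedded in the line as a single JSON object, so it can be pulled straight back out. A tiny sketch (editor's illustration) loading that exact structure:

    import json

    # The condition object as serialized in the setters.go line above.
    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-09-12T22:05:12Z",'
        '"lastTransitionTime":"2025-09-12T22:05:12Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}')
    print(condition["reason"], condition["lastTransitionTime"])
    # KubeletNotReady 2025-09-12T22:05:12Z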
Sep 12 22:05:13.678310 containerd[1533]: time="2025-09-12T22:05:13.674677886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\" id:\"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\" pid:4657 exited_at:{seconds:1757714713 nanos:674375100}"
Sep 12 22:05:13.698058 containerd[1533]: time="2025-09-12T22:05:13.698000105Z" level=info msg="received exit event container_id:\"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\" id:\"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\" pid:4657 exited_at:{seconds:1757714713 nanos:674375100}"
Sep 12 22:05:13.705205 containerd[1533]: time="2025-09-12T22:05:13.705152315Z" level=info msg="StartContainer for \"55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38\" returns successfully"
Sep 12 22:05:13.717309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55eb013e720f3a1151abea108a253341add12842a61a5052779b900ae7783f38-rootfs.mount: Deactivated successfully.
Sep 12 22:05:14.604939 kubelet[2655]: E0912 22:05:14.604899 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:14.607969 containerd[1533]: time="2025-09-12T22:05:14.607237538Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 22:05:14.666853 containerd[1533]: time="2025-09-12T22:05:14.666727440Z" level=info msg="Container 33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:05:14.687873 containerd[1533]: time="2025-09-12T22:05:14.686831849Z" level=info msg="CreateContainer within sandbox \"016f97a8e9889a79947ccb269f676f0a1f8c36e040f6fb2307ccdb78715840d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\""
Sep 12 22:05:14.688538 containerd[1533]: time="2025-09-12T22:05:14.688505413Z" level=info msg="StartContainer for \"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\""
Sep 12 22:05:14.690109 containerd[1533]: time="2025-09-12T22:05:14.690072782Z" level=info msg="connecting to shim 33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2" address="unix:///run/containerd/s/eab3be7f701c3abee4ef8461128b3ab89052ad0598eedab55fa75c21c91b17f0" protocol=ttrpc version=3
Sep 12 22:05:14.710690 systemd[1]: Started cri-containerd-33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2.scope - libcontainer container 33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2.
Sep 12 22:05:14.746058 containerd[1533]: time="2025-09-12T22:05:14.746014485Z" level=info msg="StartContainer for \"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" returns successfully"
Sep 12 22:05:14.798993 containerd[1533]: time="2025-09-12T22:05:14.798946524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"6ba5a636e9ae3186938ab39d88bf9e62144a30e78ae5cdb36fca809add74eff0\" pid:4726 exited_at:{seconds:1757714714 nanos:798655417}"
Sep 12 22:05:15.005888 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 22:05:15.310809 kubelet[2655]: E0912 22:05:15.310640 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:15.611226 kubelet[2655]: E0912 22:05:15.611112 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:15.625652 kubelet[2655]: I0912 22:05:15.625427 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-prrpv" podStartSLOduration=5.625409065 podStartE2EDuration="5.625409065s" podCreationTimestamp="2025-09-12 22:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:05:15.624031842 +0000 UTC m=+84.398680306" watchObservedRunningTime="2025-09-12 22:05:15.625409065 +0000 UTC m=+84.400057569"
Sep 12 22:05:16.612183 kubelet[2655]: E0912 22:05:16.612101 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:16.915550 containerd[1533]: time="2025-09-12T22:05:16.915495068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"7417eb3dfbd6d59455a0f583b21b867fac6e2f5c89e1ef63fae80838e6a10578\" pid:4933 exit_status:1 exited_at:{seconds:1757714716 nanos:914829213}"
Sep 12 22:05:16.930121 kubelet[2655]: E0912 22:05:16.930072 2655 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57876->127.0.0.1:41849: write tcp 127.0.0.1:57876->127.0.0.1:41849: write: connection reset by peer
Sep 12 22:05:17.880765 systemd-networkd[1451]: lxc_health: Link UP
Sep 12 22:05:17.881045 systemd-networkd[1451]: lxc_health: Gained carrier
Sep 12 22:05:18.519906 kubelet[2655]: E0912 22:05:18.519645 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:18.615165 kubelet[2655]: E0912 22:05:18.615051 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:19.070341 containerd[1533]: time="2025-09-12T22:05:19.070304489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"4f9dafc7910c5af9051a3382efe2072a5621775972522888e0b7ba1b5871aa6b\" pid:5263 exited_at:{seconds:1757714719 nanos:69945420}"
Sep 12 22:05:19.610068 systemd-networkd[1451]: lxc_health: Gained IPv6LL
Sep 12 22:05:19.628199 kubelet[2655]: E0912 22:05:19.628119 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:21.227023 containerd[1533]: time="2025-09-12T22:05:21.226946506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"5c9752d8a19908c772c6e6c2402fee083bf933dc9cf48aaba710cc42e74aa931\" pid:5289 exited_at:{seconds:1757714721 nanos:225679935}"
Sep 12 22:05:21.312184 kubelet[2655]: E0912 22:05:21.312134 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:05:23.344667 containerd[1533]: time="2025-09-12T22:05:23.344556531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"3c6d92f7e7a687e4b22aa8949114f41ed3b79a8db7c65a704b0f0eb4f49be03e\" pid:5321 exited_at:{seconds:1757714723 nanos:344265377}"
Sep 12 22:05:25.452471 containerd[1533]: time="2025-09-12T22:05:25.452402289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33479caf930fc46c096d67c01e1451364346b3a5a3fe2447b65a1a90d3b861a2\" id:\"880d423d839e31c1756514cf9b568df63150486327f608f9596468775f54bcef\" pid:5348 exited_at:{seconds:1757714725 nanos:452086893}"
Sep 12 22:05:25.456854 sshd[4465]: Connection closed by 10.0.0.1 port 42822
Sep 12 22:05:25.457142 sshd-session[4458]: pam_unix(sshd:session): session closed for user core
Sep 12 22:05:25.461554 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:42822.service: Deactivated successfully.
Sep 12 22:05:25.464283 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 22:05:25.465024 systemd-logind[1517]: Session 26 logged out. Waiting for processes to exit.
Sep 12 22:05:25.466099 systemd-logind[1517]: Removed session 26.