Sep 12 22:00:07.764523 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 22:00:07.764546 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 20:38:46 -00 2025
Sep 12 22:00:07.764556 kernel: KASLR enabled
Sep 12 22:00:07.764563 kernel: efi: EFI v2.7 by EDK II
Sep 12 22:00:07.764569 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Sep 12 22:00:07.764575 kernel: random: crng init done
Sep 12 22:00:07.764582 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 22:00:07.764588 kernel: secureboot: Secure boot enabled
Sep 12 22:00:07.764594 kernel: ACPI: Early table checksum verification disabled
Sep 12 22:00:07.764602 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 12 22:00:07.764608 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 22:00:07.764614 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764620 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764627 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764634 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764642 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764648 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764654 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764661 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764667 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:00:07.764673 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 22:00:07.764680 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 22:00:07.764686 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 22:00:07.764692 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 12 22:00:07.764699 kernel: Zone ranges:
Sep 12 22:00:07.764729 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 22:00:07.764736 kernel: DMA32 empty
Sep 12 22:00:07.764742 kernel: Normal empty
Sep 12 22:00:07.764748 kernel: Device empty
Sep 12 22:00:07.764754 kernel: Movable zone start for each node
Sep 12 22:00:07.764761 kernel: Early memory node ranges
Sep 12 22:00:07.764768 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 12 22:00:07.764774 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 12 22:00:07.764780 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 12 22:00:07.764786 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 12 22:00:07.764792 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 12 22:00:07.764798 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 12 22:00:07.764806 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 12 22:00:07.764812 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 12 22:00:07.764819 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 22:00:07.764828 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 22:00:07.764835 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 22:00:07.764841 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 12 22:00:07.764848 kernel: psci: probing for conduit method from ACPI.
Sep 12 22:00:07.764856 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 22:00:07.764863 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 22:00:07.764870 kernel: psci: Trusted OS migration not required
Sep 12 22:00:07.764876 kernel: psci: SMC Calling Convention v1.1
Sep 12 22:00:07.764883 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 22:00:07.764890 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 22:00:07.764896 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 22:00:07.764903 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 22:00:07.764910 kernel: Detected PIPT I-cache on CPU0
Sep 12 22:00:07.764918 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 22:00:07.764925 kernel: CPU features: detected: Spectre-v4
Sep 12 22:00:07.764931 kernel: CPU features: detected: Spectre-BHB
Sep 12 22:00:07.764938 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 22:00:07.764945 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 22:00:07.764951 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 22:00:07.764958 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 22:00:07.764965 kernel: alternatives: applying boot alternatives
Sep 12 22:00:07.764972 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319fa5fb212e5dd8bf766d2f9f0bbb61d6aa6c81f2813f4b5b49defba0af2b2f
Sep 12 22:00:07.764979 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 22:00:07.764986 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 22:00:07.764994 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 22:00:07.765001 kernel: Fallback order for Node 0: 0
Sep 12 22:00:07.765007 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 22:00:07.765014 kernel: Policy zone: DMA
Sep 12 22:00:07.765020 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 22:00:07.765027 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 22:00:07.765033 kernel: software IO TLB: area num 4.
Sep 12 22:00:07.765040 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 22:00:07.765046 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 12 22:00:07.765053 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 22:00:07.765060 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 22:00:07.765066 kernel: rcu: RCU event tracing is enabled.
Sep 12 22:00:07.765074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 22:00:07.765081 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 22:00:07.765088 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 22:00:07.765094 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 22:00:07.765101 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 22:00:07.765107 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:00:07.765114 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:00:07.765121 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 22:00:07.765128 kernel: GICv3: 256 SPIs implemented
Sep 12 22:00:07.765134 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 22:00:07.765140 kernel: Root IRQ handler: gic_handle_irq
Sep 12 22:00:07.765149 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 22:00:07.765155 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 22:00:07.765162 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 22:00:07.765169 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 22:00:07.765175 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 22:00:07.765182 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 22:00:07.765189 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 22:00:07.765195 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 22:00:07.765202 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 22:00:07.765208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 22:00:07.765215 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 22:00:07.765221 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 22:00:07.765229 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 22:00:07.765236 kernel: arm-pv: using stolen time PV
Sep 12 22:00:07.765243 kernel: Console: colour dummy device 80x25
Sep 12 22:00:07.765250 kernel: ACPI: Core revision 20240827
Sep 12 22:00:07.765257 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 22:00:07.765264 kernel: pid_max: default: 32768 minimum: 301
Sep 12 22:00:07.765271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 22:00:07.765277 kernel: landlock: Up and running.
Sep 12 22:00:07.765284 kernel: SELinux: Initializing.
Sep 12 22:00:07.765293 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:00:07.765300 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:00:07.765307 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 22:00:07.765314 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 22:00:07.765320 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 22:00:07.765327 kernel: Remapping and enabling EFI services.
Sep 12 22:00:07.765334 kernel: smp: Bringing up secondary CPUs ...
Sep 12 22:00:07.765340 kernel: Detected PIPT I-cache on CPU1
Sep 12 22:00:07.765347 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 22:00:07.767414 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 22:00:07.767447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 22:00:07.767455 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 22:00:07.767465 kernel: Detected PIPT I-cache on CPU2
Sep 12 22:00:07.767472 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 22:00:07.767479 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 22:00:07.767486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 22:00:07.767493 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 22:00:07.767507 kernel: Detected PIPT I-cache on CPU3
Sep 12 22:00:07.767517 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 22:00:07.767525 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 22:00:07.767532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 22:00:07.767539 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 22:00:07.767546 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 22:00:07.767553 kernel: SMP: Total of 4 processors activated.
Sep 12 22:00:07.767560 kernel: CPU: All CPU(s) started at EL1
Sep 12 22:00:07.767567 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 22:00:07.767574 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 22:00:07.767583 kernel: CPU features: detected: Common not Private translations
Sep 12 22:00:07.767590 kernel: CPU features: detected: CRC32 instructions
Sep 12 22:00:07.767597 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 22:00:07.767604 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 22:00:07.767611 kernel: CPU features: detected: LSE atomic instructions
Sep 12 22:00:07.767618 kernel: CPU features: detected: Privileged Access Never
Sep 12 22:00:07.767626 kernel: CPU features: detected: RAS Extension Support
Sep 12 22:00:07.767633 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 22:00:07.767640 kernel: alternatives: applying system-wide alternatives
Sep 12 22:00:07.767648 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 22:00:07.767657 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 12 22:00:07.767664 kernel: devtmpfs: initialized
Sep 12 22:00:07.767671 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 22:00:07.767679 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 22:00:07.767686 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 22:00:07.767693 kernel: 0 pages in range for non-PLT usage
Sep 12 22:00:07.767700 kernel: 508560 pages in range for PLT usage
Sep 12 22:00:07.767707 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 22:00:07.767715 kernel: SMBIOS 3.0.0 present.
Sep 12 22:00:07.767723 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 22:00:07.767729 kernel: DMI: Memory slots populated: 1/1
Sep 12 22:00:07.767737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 22:00:07.767744 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 22:00:07.767751 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 22:00:07.767758 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 22:00:07.767765 kernel: audit: initializing netlink subsys (disabled)
Sep 12 22:00:07.767772 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Sep 12 22:00:07.767781 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 22:00:07.767788 kernel: cpuidle: using governor menu
Sep 12 22:00:07.767795 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 22:00:07.767802 kernel: ASID allocator initialised with 32768 entries
Sep 12 22:00:07.767809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 22:00:07.767816 kernel: Serial: AMBA PL011 UART driver
Sep 12 22:00:07.767823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 22:00:07.767830 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 22:00:07.767837 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 22:00:07.767846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 22:00:07.767853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 22:00:07.767860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 22:00:07.767867 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 22:00:07.767874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 22:00:07.767881 kernel: ACPI: Added _OSI(Module Device)
Sep 12 22:00:07.767888 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 22:00:07.767895 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 22:00:07.767902 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 22:00:07.767910 kernel: ACPI: Interpreter enabled
Sep 12 22:00:07.767917 kernel: ACPI: Using GIC for interrupt routing
Sep 12 22:00:07.767924 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 22:00:07.767931 kernel: ACPI: CPU0 has been hot-added
Sep 12 22:00:07.767938 kernel: ACPI: CPU1 has been hot-added
Sep 12 22:00:07.767945 kernel: ACPI: CPU2 has been hot-added
Sep 12 22:00:07.767952 kernel: ACPI: CPU3 has been hot-added
Sep 12 22:00:07.767959 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 22:00:07.767966 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 22:00:07.767975 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 22:00:07.769613 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 22:00:07.769691 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 22:00:07.769752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 22:00:07.769812 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 22:00:07.769869 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 22:00:07.769878 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 22:00:07.769894 kernel: PCI host bridge to bus 0000:00
Sep 12 22:00:07.769961 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 22:00:07.770024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 22:00:07.770078 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 22:00:07.770131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 22:00:07.770209 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 22:00:07.770289 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 22:00:07.770365 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 22:00:07.770436 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 22:00:07.770497 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 22:00:07.770572 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 22:00:07.770634 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 22:00:07.770696 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 22:00:07.770756 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 22:00:07.770812 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 22:00:07.770867 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 22:00:07.770876 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 22:00:07.770884 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 22:00:07.770891 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 22:00:07.770898 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 22:00:07.770905 kernel: iommu: Default domain type: Translated
Sep 12 22:00:07.770912 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 22:00:07.770922 kernel: efivars: Registered efivars operations
Sep 12 22:00:07.770929 kernel: vgaarb: loaded
Sep 12 22:00:07.770936 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 22:00:07.770943 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 22:00:07.770950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 22:00:07.770957 kernel: pnp: PnP ACPI init
Sep 12 22:00:07.771025 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 22:00:07.771036 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 22:00:07.771045 kernel: NET: Registered PF_INET protocol family
Sep 12 22:00:07.771052 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 22:00:07.771059 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 22:00:07.771067 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 22:00:07.771074 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 22:00:07.771082 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 22:00:07.771089 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 22:00:07.771096 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:00:07.771104 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:00:07.771113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 22:00:07.771120 kernel: PCI: CLS 0 bytes, default 64
Sep 12 22:00:07.771127 kernel: kvm [1]: HYP mode not available
Sep 12 22:00:07.771135 kernel: Initialise system trusted keyrings
Sep 12 22:00:07.771142 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 22:00:07.771149 kernel: Key type asymmetric registered
Sep 12 22:00:07.771157 kernel: Asymmetric key parser 'x509' registered
Sep 12 22:00:07.771164 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 22:00:07.771171 kernel: io scheduler mq-deadline registered
Sep 12 22:00:07.771180 kernel: io scheduler kyber registered
Sep 12 22:00:07.771187 kernel: io scheduler bfq registered
Sep 12 22:00:07.771194 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 22:00:07.771201 kernel: ACPI: button: Power Button [PWRB]
Sep 12 22:00:07.771209 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 22:00:07.771270 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 22:00:07.771279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 22:00:07.771287 kernel: thunder_xcv, ver 1.0
Sep 12 22:00:07.771294 kernel: thunder_bgx, ver 1.0
Sep 12 22:00:07.771303 kernel: nicpf, ver 1.0
Sep 12 22:00:07.771310 kernel: nicvf, ver 1.0
Sep 12 22:00:07.773517 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 22:00:07.773604 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T22:00:07 UTC (1757714407)
Sep 12 22:00:07.773614 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 22:00:07.773622 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 22:00:07.773630 kernel: watchdog: NMI not fully supported
Sep 12 22:00:07.773637 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 22:00:07.773651 kernel: NET: Registered PF_INET6 protocol family
Sep 12 22:00:07.773659 kernel: Segment Routing with IPv6
Sep 12 22:00:07.773666 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 22:00:07.773673 kernel: NET: Registered PF_PACKET protocol family
Sep 12 22:00:07.773680 kernel: Key type dns_resolver registered
Sep 12 22:00:07.773687 kernel: registered taskstats version 1
Sep 12 22:00:07.773694 kernel: Loading compiled-in X.509 certificates
Sep 12 22:00:07.773702 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 2d7730e6d35b3fbd1c590cd72a2500b2380c020e'
Sep 12 22:00:07.773709 kernel: Demotion targets for Node 0: null
Sep 12 22:00:07.773718 kernel: Key type .fscrypt registered
Sep 12 22:00:07.773726 kernel: Key type fscrypt-provisioning registered
Sep 12 22:00:07.773733 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 22:00:07.773740 kernel: ima: Allocated hash algorithm: sha1
Sep 12 22:00:07.773748 kernel: ima: No architecture policies found
Sep 12 22:00:07.773755 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 22:00:07.773762 kernel: clk: Disabling unused clocks
Sep 12 22:00:07.773769 kernel: PM: genpd: Disabling unused power domains
Sep 12 22:00:07.773776 kernel: Warning: unable to open an initial console.
Sep 12 22:00:07.773785 kernel: Freeing unused kernel memory: 38976K
Sep 12 22:00:07.773792 kernel: Run /init as init process
Sep 12 22:00:07.773799 kernel: with arguments:
Sep 12 22:00:07.773807 kernel: /init
Sep 12 22:00:07.773814 kernel: with environment:
Sep 12 22:00:07.773820 kernel: HOME=/
Sep 12 22:00:07.773827 kernel: TERM=linux
Sep 12 22:00:07.773834 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 22:00:07.773842 systemd[1]: Successfully made /usr/ read-only.
Sep 12 22:00:07.773855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 22:00:07.773863 systemd[1]: Detected virtualization kvm.
Sep 12 22:00:07.773871 systemd[1]: Detected architecture arm64.
Sep 12 22:00:07.773878 systemd[1]: Running in initrd.
Sep 12 22:00:07.773886 systemd[1]: No hostname configured, using default hostname.
Sep 12 22:00:07.773894 systemd[1]: Hostname set to .
Sep 12 22:00:07.773901 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 22:00:07.773910 systemd[1]: Queued start job for default target initrd.target.
Sep 12 22:00:07.773918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:00:07.773925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:00:07.773934 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 22:00:07.773942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 22:00:07.773949 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 22:00:07.773958 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 22:00:07.773968 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 22:00:07.773986 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 22:00:07.773994 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:00:07.774002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:00:07.774009 systemd[1]: Reached target paths.target - Path Units.
Sep 12 22:00:07.774017 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 22:00:07.774024 systemd[1]: Reached target swap.target - Swaps.
Sep 12 22:00:07.774032 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 22:00:07.774041 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 22:00:07.774049 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 22:00:07.774057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 22:00:07.774065 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 22:00:07.774073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:00:07.774081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:00:07.774090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:00:07.774097 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 22:00:07.774105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 22:00:07.774114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 22:00:07.774122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 22:00:07.774131 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 22:00:07.774139 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 22:00:07.774146 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 22:00:07.774154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 22:00:07.774162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:00:07.774170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:00:07.774179 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 22:00:07.774187 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 22:00:07.774195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 22:00:07.774219 systemd-journald[244]: Collecting audit messages is disabled.
Sep 12 22:00:07.774240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 22:00:07.774248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:00:07.774256 kernel: Bridge firewalling registered
Sep 12 22:00:07.774263 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 22:00:07.774273 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:00:07.774282 systemd-journald[244]: Journal started
Sep 12 22:00:07.774299 systemd-journald[244]: Runtime Journal (/run/log/journal/6eed61ac948c4fd9805d689cbe3df752) is 6M, max 48.5M, 42.4M free.
Sep 12 22:00:07.775701 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 22:00:07.754145 systemd-modules-load[245]: Inserted module 'overlay'
Sep 12 22:00:07.769947 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 12 22:00:07.791677 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 22:00:07.796676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 22:00:07.798284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 22:00:07.800590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 22:00:07.806919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 22:00:07.809422 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 22:00:07.814236 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 22:00:07.815339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 22:00:07.818940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:00:07.820715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:00:07.824284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 22:00:07.827752 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=319fa5fb212e5dd8bf766d2f9f0bbb61d6aa6c81f2813f4b5b49defba0af2b2f
Sep 12 22:00:07.864158 systemd-resolved[295]: Positive Trust Anchors:
Sep 12 22:00:07.864178 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 22:00:07.864210 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 22:00:07.869472 systemd-resolved[295]: Defaulting to hostname 'linux'.
Sep 12 22:00:07.871194 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 22:00:07.872151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:00:07.898527 kernel: SCSI subsystem initialized
Sep 12 22:00:07.903515 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 22:00:07.910523 kernel: iscsi: registered transport (tcp)
Sep 12 22:00:07.923549 kernel: iscsi: registered transport (qla4xxx)
Sep 12 22:00:07.923600 kernel: QLogic iSCSI HBA Driver
Sep 12 22:00:07.939093 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 22:00:07.958813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:00:07.960041 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 22:00:08.004278 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 22:00:08.006019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 22:00:08.062554 kernel: raid6: neonx8 gen() 15723 MB/s
Sep 12 22:00:08.079527 kernel: raid6: neonx4 gen() 15804 MB/s
Sep 12 22:00:08.096534 kernel: raid6: neonx2 gen() 13223 MB/s
Sep 12 22:00:08.113524 kernel: raid6: neonx1 gen() 9991 MB/s
Sep 12 22:00:08.130529 kernel: raid6: int64x8 gen() 6897 MB/s
Sep 12 22:00:08.147555 kernel: raid6: int64x4 gen() 7328 MB/s
Sep 12 22:00:08.164525 kernel: raid6: int64x2 gen() 6048 MB/s
Sep 12 22:00:08.183376 kernel: raid6: int64x1 gen() 5041 MB/s
Sep 12 22:00:08.183432 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s
Sep 12 22:00:08.201381 kernel: raid6: .... xor() 12009 MB/s, rmw enabled
Sep 12 22:00:08.201420 kernel: raid6: using neon recovery algorithm
Sep 12 22:00:08.209384 kernel: xor: measuring software checksum speed
Sep 12 22:00:08.209419 kernel: 8regs : 20950 MB/sec
Sep 12 22:00:08.209435 kernel: 32regs : 21676 MB/sec
Sep 12 22:00:08.209444 kernel: arm64_neon : 28070 MB/sec
Sep 12 22:00:08.209452 kernel: xor: using function: arm64_neon (28070 MB/sec)
Sep 12 22:00:08.263541 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 22:00:08.270030 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 22:00:08.272970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:00:08.300997 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 12 22:00:08.306644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:00:08.308832 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 22:00:08.331445 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Sep 12 22:00:08.357072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 22:00:08.361852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 22:00:08.421518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:00:08.424392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 22:00:08.481532 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 22:00:08.481845 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 22:00:08.487520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 22:00:08.487555 kernel: GPT:9289727 != 19775487
Sep 12 22:00:08.487565 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 22:00:08.487574 kernel: GPT:9289727 != 19775487
Sep 12 22:00:08.488701 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 22:00:08.488723 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 22:00:08.495235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 22:00:08.495363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:00:08.500443 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:00:08.504681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:00:08.520341 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 22:00:08.532539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 22:00:08.533583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:00:08.547941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 22:00:08.553892 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 22:00:08.554837 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 22:00:08.563015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 22:00:08.563959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 22:00:08.565743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:00:08.567514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 22:00:08.569961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 22:00:08.571590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 22:00:08.589011 disk-uuid[592]: Primary Header is updated.
Sep 12 22:00:08.589011 disk-uuid[592]: Secondary Entries is updated.
Sep 12 22:00:08.589011 disk-uuid[592]: Secondary Header is updated.
Sep 12 22:00:08.592542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 22:00:08.593818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 22:00:09.600519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 22:00:09.600572 disk-uuid[596]: The operation has completed successfully.
Sep 12 22:00:09.629023 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 22:00:09.629152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 22:00:09.654064 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 22:00:09.667557 sh[612]: Success
Sep 12 22:00:09.680107 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 22:00:09.680147 kernel: device-mapper: uevent: version 1.0.3
Sep 12 22:00:09.680157 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 22:00:09.688520 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 22:00:09.713913 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 22:00:09.716329 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 22:00:09.728412 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 22:00:09.734530 kernel: BTRFS: device fsid 254e43f1-b609-42b8-bcc5-437252095415 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 12 22:00:09.734572 kernel: BTRFS info (device dm-0): first mount of filesystem 254e43f1-b609-42b8-bcc5-437252095415
Sep 12 22:00:09.736073 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 22:00:09.739514 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 22:00:09.739536 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 22:00:09.741007 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 22:00:09.742010 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 22:00:09.743258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 22:00:09.743985 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 22:00:09.745458 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 22:00:09.770578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655)
Sep 12 22:00:09.772562 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0
Sep 12 22:00:09.772598 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 22:00:09.774724 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 22:00:09.774755 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 22:00:09.778514 kernel: BTRFS info (device vda6): last unmount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0
Sep 12 22:00:09.779232 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 22:00:09.781179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 22:00:09.846650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 22:00:09.852098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 22:00:09.888919 ignition[703]: Ignition 2.22.0
Sep 12 22:00:09.888934 ignition[703]: Stage: fetch-offline
Sep 12 22:00:09.888961 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:09.888969 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:09.889046 ignition[703]: parsed url from cmdline: ""
Sep 12 22:00:09.889050 ignition[703]: no config URL provided
Sep 12 22:00:09.889054 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 22:00:09.892797 systemd-networkd[802]: lo: Link UP
Sep 12 22:00:09.889061 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 12 22:00:09.892800 systemd-networkd[802]: lo: Gained carrier
Sep 12 22:00:09.889079 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 12 22:00:09.893837 systemd-networkd[802]: Enumeration completed
Sep 12 22:00:09.889083 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 22:00:09.893938 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 22:00:09.894211 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:00:09.902325 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 12 22:00:09.894216 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 22:00:09.895392 systemd-networkd[802]: eth0: Link UP
Sep 12 22:00:09.895895 systemd-networkd[802]: eth0: Gained carrier
Sep 12 22:00:09.895906 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:00:09.895946 systemd[1]: Reached target network.target - Network.
Sep 12 22:00:09.915543 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 22:00:09.948584 ignition[703]: parsing config with SHA512: f7fdf759fbb3bfb2d6ba4f4008117dee00b96edcbe3fcd6f70a476ee50eb8787fbf30a2605816abfb0283dce24808403bb0bb5de87f4c2594efc303c5add2b7d
Sep 12 22:00:09.954998 unknown[703]: fetched base config from "system"
Sep 12 22:00:09.955009 unknown[703]: fetched user config from "qemu"
Sep 12 22:00:09.955482 ignition[703]: fetch-offline: fetch-offline passed
Sep 12 22:00:09.957471 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 22:00:09.955554 ignition[703]: Ignition finished successfully
Sep 12 22:00:09.958780 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 22:00:09.959480 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 22:00:09.991066 ignition[811]: Ignition 2.22.0
Sep 12 22:00:09.991084 ignition[811]: Stage: kargs
Sep 12 22:00:09.991205 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:09.991213 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:09.991953 ignition[811]: kargs: kargs passed
Sep 12 22:00:09.991995 ignition[811]: Ignition finished successfully
Sep 12 22:00:09.996546 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 22:00:09.998924 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 22:00:10.030736 ignition[819]: Ignition 2.22.0
Sep 12 22:00:10.030750 ignition[819]: Stage: disks
Sep 12 22:00:10.030871 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:10.030880 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:10.031629 ignition[819]: disks: disks passed
Sep 12 22:00:10.031673 ignition[819]: Ignition finished successfully
Sep 12 22:00:10.033829 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 22:00:10.035042 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 22:00:10.036643 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 22:00:10.038188 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 22:00:10.039628 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 22:00:10.041152 systemd[1]: Reached target basic.target - Basic System.
Sep 12 22:00:10.043235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 22:00:10.073744 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 22:00:10.077599 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 22:00:10.079626 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 22:00:10.156526 kernel: EXT4-fs (vda9): mounted filesystem a7b592ec-3c41-4dc2-88a7-056c1f18b418 r/w with ordered data mode. Quota mode: none.
Sep 12 22:00:10.157170 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 22:00:10.158218 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 22:00:10.163614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 22:00:10.165064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 22:00:10.165859 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 22:00:10.165897 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 22:00:10.165920 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 22:00:10.178013 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 22:00:10.181029 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 22:00:10.185021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 12 22:00:10.185074 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0
Sep 12 22:00:10.185100 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 22:00:10.188336 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 22:00:10.188395 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 22:00:10.189374 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 22:00:10.218080 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 22:00:10.222113 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Sep 12 22:00:10.225908 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 22:00:10.229079 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 22:00:10.295880 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 22:00:10.298028 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 22:00:10.300318 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 22:00:10.320521 kernel: BTRFS info (device vda6): last unmount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0
Sep 12 22:00:10.334042 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 22:00:10.341885 ignition[952]: INFO : Ignition 2.22.0
Sep 12 22:00:10.341885 ignition[952]: INFO : Stage: mount
Sep 12 22:00:10.344586 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:10.344586 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:10.344586 ignition[952]: INFO : mount: mount passed
Sep 12 22:00:10.344586 ignition[952]: INFO : Ignition finished successfully
Sep 12 22:00:10.345718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 22:00:10.347847 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 22:00:10.872928 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 22:00:10.874352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 22:00:10.907992 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 12 22:00:10.908044 kernel: BTRFS info (device vda6): first mount of filesystem 5dadbedd-e975-4944-978a-462cb6ec6aa0
Sep 12 22:00:10.908063 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 22:00:10.911642 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 22:00:10.911695 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 22:00:10.913060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 22:00:10.950037 ignition[983]: INFO : Ignition 2.22.0
Sep 12 22:00:10.950037 ignition[983]: INFO : Stage: files
Sep 12 22:00:10.951542 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:10.951542 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:10.951542 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 22:00:10.954666 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 22:00:10.954666 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 22:00:10.954666 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 22:00:10.954666 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 22:00:10.954666 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 22:00:10.954178 unknown[983]: wrote ssh authorized keys file for user: core
Sep 12 22:00:10.960976 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 22:00:10.960976 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 22:00:11.030645 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 22:00:11.212311 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 22:00:11.212311 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 22:00:11.212311 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 22:00:11.481100 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 22:00:11.653406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 22:00:11.653406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 22:00:11.657180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 22:00:11.675058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 22:00:11.675058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 22:00:11.675058 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 22:00:11.840673 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 12 22:00:11.982216 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 22:00:12.983518 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 22:00:12.983518 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 22:00:12.986676 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 22:00:12.990158 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 22:00:12.990158 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 22:00:12.990158 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 22:00:12.994961 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 22:00:12.994961 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 22:00:12.994961 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 22:00:12.994961 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 22:00:13.005520 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 22:00:13.010014 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 22:00:13.011235 ignition[983]: INFO : files: files passed
Sep 12 22:00:13.011235 ignition[983]: INFO : Ignition finished successfully
Sep 12 22:00:13.015008 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 22:00:13.018630 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 22:00:13.022302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 22:00:13.038190 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 22:00:13.037754 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 22:00:13.037887 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 22:00:13.046153 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:00:13.046153 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:00:13.049107 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:00:13.048328 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:00:13.050392 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 22:00:13.052999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 22:00:13.102053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 22:00:13.102155 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 22:00:13.104030 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 22:00:13.105395 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 22:00:13.106873 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 22:00:13.107656 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 22:00:13.123216 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:00:13.125380 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 22:00:13.144515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:00:13.145433 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:00:13.147105 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 22:00:13.148414 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 22:00:13.148550 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:00:13.150443 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 22:00:13.152038 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 22:00:13.153394 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 22:00:13.154653 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 22:00:13.156137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 22:00:13.157626 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 22:00:13.159080 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 22:00:13.160449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 22:00:13.161967 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 22:00:13.163390 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 22:00:13.164724 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 22:00:13.165880 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 22:00:13.165988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 22:00:13.167800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:00:13.169315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:00:13.170836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 22:00:13.171559 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:00:13.173141 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 22:00:13.173243 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 22:00:13.175365 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 22:00:13.175473 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 22:00:13.177001 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 22:00:13.178239 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 22:00:13.178330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:00:13.179903 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 22:00:13.181097 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 22:00:13.182425 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 22:00:13.182521 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 22:00:13.184270 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 22:00:13.184346 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 22:00:13.185563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 22:00:13.185668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:00:13.187028 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 22:00:13.187123 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 22:00:13.190190 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 22:00:13.191652 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 22:00:13.191767 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:00:13.212191 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 22:00:13.212894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 22:00:13.213010 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:00:13.214652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 22:00:13.214744 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 22:00:13.220120 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 22:00:13.220231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 22:00:13.227284 ignition[1038]: INFO : Ignition 2.22.0
Sep 12 22:00:13.227284 ignition[1038]: INFO : Stage: umount
Sep 12 22:00:13.229803 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:00:13.229803 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:00:13.229803 ignition[1038]: INFO : umount: umount passed
Sep 12 22:00:13.229803 ignition[1038]: INFO : Ignition finished successfully
Sep 12 22:00:13.230259 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 22:00:13.230753 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 22:00:13.230835 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 22:00:13.233663 systemd[1]: Stopped target network.target - Network.
Sep 12 22:00:13.234707 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 22:00:13.234766 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 22:00:13.235974 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 22:00:13.236010 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 22:00:13.237260 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 22:00:13.237309 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 22:00:13.238936 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 22:00:13.238980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 22:00:13.241330 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 22:00:13.242885 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 22:00:13.250062 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 22:00:13.250177 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 22:00:13.253354 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 22:00:13.253603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 22:00:13.253640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:00:13.256450 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 22:00:13.261106 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 22:00:13.261224 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 22:00:13.264283 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 22:00:13.264456 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 22:00:13.266422 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 22:00:13.266454 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:00:13.268948 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 22:00:13.270227 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 22:00:13.270277 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 22:00:13.272053 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 22:00:13.272097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:00:13.274380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 22:00:13.274618 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:00:13.275949 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:00:13.279359 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 22:00:13.291167 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 22:00:13.291291 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 22:00:13.292984 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 22:00:13.293105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:00:13.294988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 22:00:13.295049 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:00:13.295994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 22:00:13.296023 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:00:13.297969 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 22:00:13.298022 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 22:00:13.300619 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 22:00:13.300850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 22:00:13.303248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 22:00:13.303330 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 22:00:13.306720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 22:00:13.308013 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 22:00:13.308071 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:00:13.310431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 22:00:13.310473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 22:00:13.313292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 22:00:13.313348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:00:13.324264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 22:00:13.324401 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 22:00:13.342820 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 22:00:13.344559 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 22:00:13.345836 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 22:00:13.347079 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 22:00:13.347143 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 22:00:13.349716 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 22:00:13.382456 systemd[1]: Switching root.
Sep 12 22:00:13.430171 systemd-journald[244]: Journal stopped
Sep 12 22:00:14.302366 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 12 22:00:14.302490 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 22:00:14.302612 kernel: SELinux: policy capability open_perms=1
Sep 12 22:00:14.302625 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 22:00:14.302634 kernel: SELinux: policy capability always_check_network=0
Sep 12 22:00:14.302648 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 22:00:14.302659 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 22:00:14.302670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 22:00:14.302678 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 22:00:14.302687 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 22:00:14.302697 kernel: audit: type=1403 audit(1757714413.689:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 22:00:14.302708 systemd[1]: Successfully loaded SELinux policy in 54.856ms.
Sep 12 22:00:14.302726 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.212ms.
Sep 12 22:00:14.302737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 22:00:14.302748 systemd[1]: Detected virtualization kvm.
Sep 12 22:00:14.302758 systemd[1]: Detected architecture arm64.
Sep 12 22:00:14.302769 systemd[1]: Detected first boot.
Sep 12 22:00:14.302779 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 22:00:14.302789 zram_generator::config[1083]: No configuration found.
Sep 12 22:00:14.302803 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 22:00:14.302814 systemd[1]: Populated /etc with preset unit settings.
Sep 12 22:00:14.302825 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 22:00:14.302834 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 22:00:14.302844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 22:00:14.302854 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:00:14.302865 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 22:00:14.302874 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 22:00:14.302884 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 22:00:14.302893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 22:00:14.302903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 22:00:14.302913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 22:00:14.302923 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 22:00:14.302934 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 22:00:14.302945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:00:14.302956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:00:14.302966 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 22:00:14.302975 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 22:00:14.302985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 22:00:14.302995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 22:00:14.303005 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 22:00:14.303015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:00:14.303026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:00:14.303035 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 22:00:14.303045 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 22:00:14.303054 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 22:00:14.303064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 22:00:14.303074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:00:14.303083 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 22:00:14.303093 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 22:00:14.303103 systemd[1]: Reached target swap.target - Swaps.
Sep 12 22:00:14.303114 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 22:00:14.303124 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 22:00:14.303134 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 22:00:14.303144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:00:14.303153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:00:14.303163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:00:14.303173 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 22:00:14.303183 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 22:00:14.303193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 22:00:14.303204 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 22:00:14.303214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 22:00:14.303224 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 22:00:14.303691 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 22:00:14.303712 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 22:00:14.303722 systemd[1]: Reached target machines.target - Containers.
Sep 12 22:00:14.303732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 22:00:14.303742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:00:14.303758 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 22:00:14.303768 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 22:00:14.303779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:00:14.303789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:00:14.303798 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:00:14.303808 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 22:00:14.303818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:00:14.303833 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 22:00:14.303843 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 22:00:14.303855 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 22:00:14.303867 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 22:00:14.303876 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 22:00:14.303887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:00:14.303896 kernel: fuse: init (API version 7.41)
Sep 12 22:00:14.303907 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 22:00:14.303918 kernel: loop: module loaded
Sep 12 22:00:14.303927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 22:00:14.303937 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 22:00:14.303949 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 22:00:14.303959 kernel: ACPI: bus type drm_connector registered
Sep 12 22:00:14.303969 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 22:00:14.303979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 22:00:14.303989 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 22:00:14.304001 systemd[1]: Stopped verity-setup.service.
Sep 12 22:00:14.304010 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 22:00:14.304055 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 12 22:00:14.304080 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 22:00:14.304090 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 22:00:14.304101 systemd-journald[1151]: Journal started
Sep 12 22:00:14.304123 systemd-journald[1151]: Runtime Journal (/run/log/journal/6eed61ac948c4fd9805d689cbe3df752) is 6M, max 48.5M, 42.4M free.
Sep 12 22:00:14.093026 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 22:00:14.114425 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 22:00:14.114813 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 22:00:14.308526 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 22:00:14.309648 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 22:00:14.310660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 22:00:14.311620 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 22:00:14.312670 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 22:00:14.313966 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:00:14.315321 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 22:00:14.315609 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 22:00:14.316773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:00:14.316942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:00:14.319824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:00:14.320005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:00:14.321247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:00:14.321470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:00:14.322654 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 22:00:14.322827 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 22:00:14.323985 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:00:14.324134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:00:14.325491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:00:14.326710 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:00:14.327956 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 22:00:14.330607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 22:00:14.342491 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 22:00:14.344669 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 22:00:14.346448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 22:00:14.347433 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 22:00:14.347464 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 22:00:14.349251 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 22:00:14.359525 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 22:00:14.360455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:00:14.361787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 22:00:14.363650 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 22:00:14.364578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:00:14.365981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 22:00:14.367568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:00:14.369189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 22:00:14.371357 systemd-journald[1151]: Time spent on flushing to /var/log/journal/6eed61ac948c4fd9805d689cbe3df752 is 13.464ms for 884 entries.
Sep 12 22:00:14.371357 systemd-journald[1151]: System Journal (/var/log/journal/6eed61ac948c4fd9805d689cbe3df752) is 8M, max 195.6M, 187.6M free.
Sep 12 22:00:14.406027 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 12 22:00:14.406282 kernel: loop0: detected capacity change from 0 to 119368
Sep 12 22:00:14.371784 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 22:00:14.375194 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 22:00:14.391567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:00:14.392983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 22:00:14.394453 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 22:00:14.397828 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 22:00:14.404823 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 22:00:14.407686 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 22:00:14.410569 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 22:00:14.415580 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:00:14.416571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 22:00:14.434199 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 22:00:14.437593 kernel: loop1: detected capacity change from 0 to 100632
Sep 12 22:00:14.437702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 22:00:14.451670 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 22:00:14.465895 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 12 22:00:14.466217 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 12 22:00:14.473326 kernel: loop2: detected capacity change from 0 to 203944
Sep 12 22:00:14.470327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 22:00:14.499525 kernel: loop3: detected capacity change from 0 to 119368
Sep 12 22:00:14.504542 kernel: loop4: detected capacity change from 0 to 100632
Sep 12 22:00:14.509530 kernel: loop5: detected capacity change from 0 to 203944
Sep 12 22:00:14.513610 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 22:00:14.513999 (sd-merge)[1221]: Merged extensions into '/usr'.
Sep 12 22:00:14.518129 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 22:00:14.518144 systemd[1]: Reloading...
Sep 12 22:00:14.578592 zram_generator::config[1251]: No configuration found.
Sep 12 22:00:14.642055 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 22:00:14.714764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 22:00:14.714920 systemd[1]: Reloading finished in 196 ms.
Sep 12 22:00:14.730557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 22:00:14.731874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 22:00:14.744752 systemd[1]: Starting ensure-sysext.service...
Sep 12 22:00:14.746593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 22:00:14.755400 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)...
Sep 12 22:00:14.755417 systemd[1]: Reloading...
Sep 12 22:00:14.770592 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 22:00:14.770626 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 22:00:14.770887 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 22:00:14.771073 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 22:00:14.772198 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 22:00:14.772543 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 12 22:00:14.772699 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 12 22:00:14.775741 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 22:00:14.775930 systemd-tmpfiles[1283]: Skipping /boot
Sep 12 22:00:14.782288 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 22:00:14.782418 systemd-tmpfiles[1283]: Skipping /boot
Sep 12 22:00:14.810558 zram_generator::config[1310]: No configuration found.
Sep 12 22:00:14.940810 systemd[1]: Reloading finished in 184 ms.
Sep 12 22:00:14.959201 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 22:00:14.964784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:00:14.974714 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:00:14.977038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 22:00:14.987408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 22:00:14.990522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 22:00:14.992599 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:00:14.996694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 22:00:15.003790 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 22:00:15.006123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:00:15.010972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:00:15.015835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:00:15.019770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:00:15.020922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:00:15.021048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:00:15.022138 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 22:00:15.031491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:00:15.032172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:00:15.032338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:00:15.034354 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 22:00:15.037290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 22:00:15.042559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:00:15.042727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:00:15.044726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:00:15.044882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:00:15.047088 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:00:15.047242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:00:15.049246 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 22:00:15.054846 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 22:00:15.057570 augenrules[1381]: No rules
Sep 12 22:00:15.058789 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:00:15.059573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:00:15.061969 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 12 22:00:15.063464 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 22:00:15.083800 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:00:15.084909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:00:15.086223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:00:15.090308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:00:15.094852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:00:15.101794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:00:15.103784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:00:15.103912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:00:15.104026 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 22:00:15.105968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:00:15.112884 systemd[1]: Finished ensure-sysext.service.
Sep 12 22:00:15.114873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:00:15.115588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:00:15.119036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:00:15.119573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:00:15.132836 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 22:00:15.140346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 22:00:15.141875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:00:15.150748 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 22:00:15.152234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:00:15.154576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:00:15.168924 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:00:15.169100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:00:15.173770 augenrules[1393]: /sbin/augenrules: No change
Sep 12 22:00:15.179527 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:00:15.179531 systemd-resolved[1350]: Positive Trust Anchors:
Sep 12 22:00:15.179543 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 22:00:15.179574 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 22:00:15.186586 augenrules[1452]: No rules
Sep 12 22:00:15.186602 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Sep 12 22:00:15.188854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 22:00:15.192021 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:00:15.192236 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:00:15.195966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:00:15.202689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 22:00:15.205696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 22:00:15.228179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 22:00:15.253224 systemd-networkd[1432]: lo: Link UP
Sep 12 22:00:15.253239 systemd-networkd[1432]: lo: Gained carrier
Sep 12 22:00:15.254034 systemd-networkd[1432]: Enumeration completed
Sep 12 22:00:15.254149 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 22:00:15.255241 systemd[1]: Reached target network.target - Network.
Sep 12 22:00:15.257195 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 22:00:15.260926 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 22:00:15.261948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 22:00:15.262996 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 22:00:15.263978 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 22:00:15.265053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 22:00:15.265386 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:00:15.265390 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 22:00:15.266142 systemd-networkd[1432]: eth0: Link UP
Sep 12 22:00:15.266260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 22:00:15.266443 systemd-networkd[1432]: eth0: Gained carrier
Sep 12 22:00:15.266523 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:00:15.267466 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 22:00:15.267510 systemd[1]: Reached target paths.target - Path Units.
Sep 12 22:00:15.268234 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 22:00:15.269736 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 22:00:15.270676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 22:00:15.271736 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 22:00:15.273382 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 22:00:15.276721 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 22:00:15.279253 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 22:00:15.282724 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 22:00:15.283856 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 22:00:15.288292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 22:00:15.289821 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 22:00:15.291604 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 22:00:15.291709 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 22:00:15.292598 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Sep 12 22:00:15.293123 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 22:00:15.294020 systemd[1]: Reached target basic.target - Basic System.
Sep 12 22:00:15.295151 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 22:00:15.295191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:00:15.295212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:00:15.295383 systemd-timesyncd[1438]: Initial clock synchronization to Fri 2025-09-12 22:00:15.599152 UTC.
Sep 12 22:00:15.297632 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 22:00:15.300797 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 22:00:15.302939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 22:00:15.306726 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 22:00:15.326770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 22:00:15.327724 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 22:00:15.331213 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 22:00:15.331703 jq[1487]: false
Sep 12 22:00:15.334395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 22:00:15.336604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 22:00:15.340977 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 22:00:15.346282 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 22:00:15.348571 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 22:00:15.349078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 22:00:15.350603 extend-filesystems[1488]: Found /dev/vda6
Sep 12 22:00:15.352695 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 22:00:15.356747 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 22:00:15.360651 extend-filesystems[1488]: Found /dev/vda9
Sep 12 22:00:15.360676 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 22:00:15.363517 extend-filesystems[1488]: Checking size of /dev/vda9
Sep 12 22:00:15.363740 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 22:00:15.366939 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 22:00:15.367129 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 22:00:15.368102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 22:00:15.368272 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 22:00:15.371775 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 22:00:15.374119 jq[1503]: true
Sep 12 22:00:15.371954 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 22:00:15.389361 tar[1514]: linux-arm64/helm
Sep 12 22:00:15.389770 extend-filesystems[1488]: Resized partition /dev/vda9
Sep 12 22:00:15.398531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:00:15.399883 extend-filesystems[1529]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 22:00:15.404707 jq[1517]: true
Sep 12 22:00:15.405290 update_engine[1500]: I20250912 22:00:15.404891 1500 main.cc:92] Flatcar Update Engine starting
Sep 12 22:00:15.411560 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 22:00:15.414979 (ntainerd)[1516]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 22:00:15.415324 dbus-daemon[1483]: [system] SELinux support is enabled
Sep 12 22:00:15.415597 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 22:00:15.420989 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 22:00:15.421038 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 22:00:15.423714 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 22:00:15.423737 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 22:00:15.428550 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 22:00:15.431211 update_engine[1500]: I20250912 22:00:15.431007 1500 update_check_scheduler.cc:74] Next update check in 6m45s
Sep 12 22:00:15.433986 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 22:00:15.447024 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 22:00:15.461542 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 22:00:15.461542 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 22:00:15.461542 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 22:00:15.477652 extend-filesystems[1488]: Resized filesystem in /dev/vda9
Sep 12 22:00:15.462798 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 22:00:15.463026 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 22:00:15.487024 bash[1550]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 22:00:15.498850 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 22:00:15.499233 systemd-logind[1498]: New seat seat0.
Sep 12 22:00:15.517949 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 22:00:15.519938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:00:15.521795 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 22:00:15.536695 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 22:00:15.541630 locksmithd[1540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 22:00:15.607026 containerd[1516]: time="2025-09-12T22:00:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 22:00:15.607844 containerd[1516]: time="2025-09-12T22:00:15.607808360Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 22:00:15.618239 containerd[1516]: time="2025-09-12T22:00:15.618200680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9µs"
Sep 12 22:00:15.618239 containerd[1516]: time="2025-09-12T22:00:15.618236560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 22:00:15.618292 containerd[1516]: time="2025-09-12T22:00:15.618254880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 22:00:15.618432 containerd[1516]: time="2025-09-12T22:00:15.618411560Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 22:00:15.618457 containerd[1516]: time="2025-09-12T22:00:15.618432920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 22:00:15.618474 containerd[1516]: time="2025-09-12T22:00:15.618457480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:00:15.618650 containerd[1516]: time="2025-09-12T22:00:15.618625440Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:00:15.618674 containerd[1516]: time="2025-09-12T22:00:15.618649040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619070 containerd[1516]: time="2025-09-12T22:00:15.618995920Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619095 containerd[1516]: time="2025-09-12T22:00:15.619067880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619095 containerd[1516]: time="2025-09-12T22:00:15.619082280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619095 containerd[1516]: time="2025-09-12T22:00:15.619090400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619284 containerd[1516]: time="2025-09-12T22:00:15.619225560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619698 containerd[1516]: time="2025-09-12T22:00:15.619672360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619778 containerd[1516]: time="2025-09-12T22:00:15.619758840Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:00:15.619802 containerd[1516]: time="2025-09-12T22:00:15.619778280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 22:00:15.619874 containerd[1516]: time="2025-09-12T22:00:15.619856240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 22:00:15.620247 containerd[1516]: time="2025-09-12T22:00:15.620225800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 22:00:15.620369 containerd[1516]: time="2025-09-12T22:00:15.620350080Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 22:00:15.623963 containerd[1516]: time="2025-09-12T22:00:15.623928160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 22:00:15.624029 containerd[1516]: time="2025-09-12T22:00:15.624006280Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 22:00:15.624029 containerd[1516]: time="2025-09-12T22:00:15.624022000Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 22:00:15.624064 containerd[1516]: time="2025-09-12T22:00:15.624047200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 22:00:15.624064 containerd[1516]: time="2025-09-12T22:00:15.624059880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 22:00:15.624099 containerd[1516]: time="2025-09-12T22:00:15.624073040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 22:00:15.624099 containerd[1516]: time="2025-09-12T22:00:15.624085520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 22:00:15.624132 containerd[1516]: time="2025-09-12T22:00:15.624100280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 22:00:15.624132 containerd[1516]: time="2025-09-12T22:00:15.624112760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 22:00:15.624132 containerd[1516]: time="2025-09-12T22:00:15.624122360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 22:00:15.624177 containerd[1516]: time="2025-09-12T22:00:15.624132360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 22:00:15.624177 containerd[1516]: time="2025-09-12T22:00:15.624145040Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 22:00:15.624284 containerd[1516]: time="2025-09-12T22:00:15.624262120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 22:00:15.624307 containerd[1516]: time="2025-09-12T22:00:15.624289160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 22:00:15.624338 containerd[1516]: time="2025-09-12T22:00:15.624304920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 22:00:15.624338 containerd[1516]: time="2025-09-12T22:00:15.624316600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 22:00:15.624376 containerd[1516]: time="2025-09-12T22:00:15.624339800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 22:00:15.624376 containerd[1516]: time="2025-09-12T22:00:15.624352720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 22:00:15.624376 containerd[1516]: time="2025-09-12T22:00:15.624363800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 22:00:15.624376 containerd[1516]: time="2025-09-12T22:00:15.624374240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 22:00:15.624443 containerd[1516]: time="2025-09-12T22:00:15.624389320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 22:00:15.624443 containerd[1516]: time="2025-09-12T22:00:15.624400800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 22:00:15.624443 containerd[1516]: time="2025-09-12T22:00:15.624410840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 22:00:15.626505 containerd[1516]: time="2025-09-12T22:00:15.624610440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 22:00:15.626505 containerd[1516]: time="2025-09-12T22:00:15.624632040Z" level=info msg="Start snapshots syncer"
Sep 12 22:00:15.626505 containerd[1516]: time="2025-09-12T22:00:15.624660240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.624861520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.624907480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.624976240Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625077160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625098360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625108760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625120400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625258880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625279720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625290440Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625518120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625541320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625551920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625644240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625669080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625677680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625853960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625871800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625884160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625895280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625973480Z" level=info msg="runtime interface created"
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625978320Z" level=info msg="created NRI interface"
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.625986200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.626043240Z" level=info msg="Connect containerd service"
Sep 12 22:00:15.626571 containerd[1516]: time="2025-09-12T22:00:15.626082440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 22:00:15.627233 containerd[1516]: time="2025-09-12T22:00:15.627199960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 22:00:15.694673 containerd[1516]: time="2025-09-12T22:00:15.694609200Z" level=info msg="Start subscribing containerd event"
Sep 12 22:00:15.694743 containerd[1516]: time="2025-09-12T22:00:15.694690280Z" level=info msg="Start recovering state"
Sep 12 22:00:15.694793 containerd[1516]: time="2025-09-12T22:00:15.694777360Z" level=info msg="Start event monitor"
Sep 12 22:00:15.694817 containerd[1516]: time="2025-09-12T22:00:15.694793800Z" level=info msg="Start cni network conf syncer for default"
Sep 12 22:00:15.694817 containerd[1516]: time="2025-09-12T22:00:15.694802520Z" level=info msg="Start streaming server"
Sep 12 22:00:15.694817 containerd[1516]: time="2025-09-12T22:00:15.694811960Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 22:00:15.694884 containerd[1516]: time="2025-09-12T22:00:15.694819760Z" level=info msg="runtime interface starting up..."
Sep 12 22:00:15.694884 containerd[1516]: time="2025-09-12T22:00:15.694825840Z" level=info msg="starting plugins..."
Sep 12 22:00:15.694884 containerd[1516]: time="2025-09-12T22:00:15.694837560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 22:00:15.694930 containerd[1516]: time="2025-09-12T22:00:15.694884400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 22:00:15.694948 containerd[1516]: time="2025-09-12T22:00:15.694932280Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 22:00:15.695078 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 22:00:15.696038 containerd[1516]: time="2025-09-12T22:00:15.696012880Z" level=info msg="containerd successfully booted in 0.089429s"
Sep 12 22:00:15.711281 tar[1514]: linux-arm64/LICENSE
Sep 12 22:00:15.711378 tar[1514]: linux-arm64/README.md
Sep 12 22:00:15.727573 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 22:00:16.299828 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 22:00:16.319515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 22:00:16.322934 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 22:00:16.346239 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 22:00:16.346470 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 22:00:16.349073 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 22:00:16.373235 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 22:00:16.375784 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 22:00:16.377654 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 12 22:00:16.378721 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 22:00:17.025000 systemd-networkd[1432]: eth0: Gained IPv6LL
Sep 12 22:00:17.027209 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 22:00:17.028765 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 22:00:17.030994 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 22:00:17.033274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:00:17.045137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 22:00:17.061863 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 22:00:17.062260 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 22:00:17.064061 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 22:00:17.065645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 22:00:17.586903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:00:17.588213 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 22:00:17.592552 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:00:17.592598 systemd[1]: Startup finished in 2.016s (kernel) + 6.079s (initrd) + 3.958s (userspace) = 12.055s.
Sep 12 22:00:17.961367 kubelet[1627]: E0912 22:00:17.961315 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:00:17.964071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:00:17.964211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:00:17.964546 systemd[1]: kubelet.service: Consumed 758ms CPU time, 257.1M memory peak.
Sep 12 22:00:20.753234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 22:00:20.754319 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:47782.service - OpenSSH per-connection server daemon (10.0.0.1:47782).
Sep 12 22:00:20.821106 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 47782 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:20.822817 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:20.828959 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 22:00:20.830096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 22:00:20.835974 systemd-logind[1498]: New session 1 of user core.
Sep 12 22:00:20.858593 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 22:00:20.861552 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 22:00:20.878905 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 22:00:20.882116 systemd-logind[1498]: New session c1 of user core.
Sep 12 22:00:20.990483 systemd[1645]: Queued start job for default target default.target.
Sep 12 22:00:21.013089 systemd[1645]: Created slice app.slice - User Application Slice.
Sep 12 22:00:21.013125 systemd[1645]: Reached target paths.target - Paths.
Sep 12 22:00:21.013164 systemd[1645]: Reached target timers.target - Timers.
Sep 12 22:00:21.014382 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 22:00:21.023974 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 22:00:21.024058 systemd[1645]: Reached target sockets.target - Sockets.
Sep 12 22:00:21.024115 systemd[1645]: Reached target basic.target - Basic System.
Sep 12 22:00:21.024148 systemd[1645]: Reached target default.target - Main User Target.
Sep 12 22:00:21.024176 systemd[1645]: Startup finished in 135ms.
Sep 12 22:00:21.025386 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 22:00:21.029322 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 22:00:21.092084 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:47792.service - OpenSSH per-connection server daemon (10.0.0.1:47792).
Sep 12 22:00:21.148689 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 47792 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:21.149977 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:21.153847 systemd-logind[1498]: New session 2 of user core.
Sep 12 22:00:21.162702 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 22:00:21.215450 sshd[1659]: Connection closed by 10.0.0.1 port 47792
Sep 12 22:00:21.215789 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 12 22:00:21.225554 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:47792.service: Deactivated successfully.
Sep 12 22:00:21.228887 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 22:00:21.229531 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit.
Sep 12 22:00:21.231702 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:47808.service - OpenSSH per-connection server daemon (10.0.0.1:47808).
Sep 12 22:00:21.232693 systemd-logind[1498]: Removed session 2.
Sep 12 22:00:21.295314 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 47808 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:21.297124 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:21.303594 systemd-logind[1498]: New session 3 of user core.
Sep 12 22:00:21.318789 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 22:00:21.368638 sshd[1668]: Connection closed by 10.0.0.1 port 47808
Sep 12 22:00:21.369085 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 12 22:00:21.380663 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:47808.service: Deactivated successfully.
Sep 12 22:00:21.382804 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 22:00:21.383638 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit.
Sep 12 22:00:21.385298 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:47814.service - OpenSSH per-connection server daemon (10.0.0.1:47814).
Sep 12 22:00:21.386330 systemd-logind[1498]: Removed session 3.
Sep 12 22:00:21.443456 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 47814 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:21.444618 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:21.448582 systemd-logind[1498]: New session 4 of user core.
Sep 12 22:00:21.461667 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 22:00:21.513255 sshd[1677]: Connection closed by 10.0.0.1 port 47814
Sep 12 22:00:21.513670 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Sep 12 22:00:21.522420 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:47814.service: Deactivated successfully.
Sep 12 22:00:21.525011 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 22:00:21.525677 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit.
Sep 12 22:00:21.527847 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:47828.service - OpenSSH per-connection server daemon (10.0.0.1:47828).
Sep 12 22:00:21.528685 systemd-logind[1498]: Removed session 4.
Sep 12 22:00:21.583657 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 47828 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:21.584884 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:21.590974 systemd-logind[1498]: New session 5 of user core.
Sep 12 22:00:21.610688 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 22:00:21.668483 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 22:00:21.668799 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:00:21.685409 sudo[1687]: pam_unix(sudo:session): session closed for user root
Sep 12 22:00:21.687034 sshd[1686]: Connection closed by 10.0.0.1 port 47828
Sep 12 22:00:21.687465 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Sep 12 22:00:21.703604 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:47828.service: Deactivated successfully.
Sep 12 22:00:21.705949 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 22:00:21.706679 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit.
Sep 12 22:00:21.708940 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:47840.service - OpenSSH per-connection server daemon (10.0.0.1:47840).
Sep 12 22:00:21.709722 systemd-logind[1498]: Removed session 5.
Sep 12 22:00:21.764885 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 47840 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:21.766119 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:21.771689 systemd-logind[1498]: New session 6 of user core.
Sep 12 22:00:21.778673 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 22:00:21.830932 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 22:00:21.831188 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:00:21.879047 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 12 22:00:21.884243 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 22:00:21.884504 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:00:21.894416 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:00:21.939286 augenrules[1721]: No rules
Sep 12 22:00:21.940567 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:00:21.942569 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:00:21.944057 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 12 22:00:21.948394 sshd[1697]: Connection closed by 10.0.0.1 port 47840
Sep 12 22:00:21.947687 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Sep 12 22:00:21.958294 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:47840.service: Deactivated successfully.
Sep 12 22:00:21.960314 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 22:00:21.960992 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit.
Sep 12 22:00:21.962635 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:47848.service - OpenSSH per-connection server daemon (10.0.0.1:47848).
Sep 12 22:00:21.963598 systemd-logind[1498]: Removed session 6.
Sep 12 22:00:22.032334 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 47848 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:00:22.032319 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:00:22.036554 systemd-logind[1498]: New session 7 of user core.
Sep 12 22:00:22.042660 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 22:00:22.096291 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 22:00:22.096635 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:00:22.370703 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 22:00:22.389963 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 22:00:22.595377 dockerd[1755]: time="2025-09-12T22:00:22.595229136Z" level=info msg="Starting up"
Sep 12 22:00:22.596203 dockerd[1755]: time="2025-09-12T22:00:22.596176872Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 22:00:22.605944 dockerd[1755]: time="2025-09-12T22:00:22.605896167Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 22:00:22.708313 dockerd[1755]: time="2025-09-12T22:00:22.708214843Z" level=info msg="Loading containers: start."
Sep 12 22:00:22.718534 kernel: Initializing XFRM netlink socket
Sep 12 22:00:22.911270 systemd-networkd[1432]: docker0: Link UP
Sep 12 22:00:22.914198 dockerd[1755]: time="2025-09-12T22:00:22.914145365Z" level=info msg="Loading containers: done."
Sep 12 22:00:22.928105 dockerd[1755]: time="2025-09-12T22:00:22.928047210Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 22:00:22.928265 dockerd[1755]: time="2025-09-12T22:00:22.928122798Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 22:00:22.928265 dockerd[1755]: time="2025-09-12T22:00:22.928192528Z" level=info msg="Initializing buildkit"
Sep 12 22:00:22.948824 dockerd[1755]: time="2025-09-12T22:00:22.948780841Z" level=info msg="Completed buildkit initialization"
Sep 12 22:00:22.953339 dockerd[1755]: time="2025-09-12T22:00:22.953302682Z" level=info msg="Daemon has completed initialization"
Sep 12 22:00:22.953469 dockerd[1755]: time="2025-09-12T22:00:22.953432377Z" level=info msg="API listen on /run/docker.sock"
Sep 12 22:00:22.953552 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 22:00:23.807766 containerd[1516]: time="2025-09-12T22:00:23.807711679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 22:00:24.535275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438488350.mount: Deactivated successfully.
Sep 12 22:00:25.659965 containerd[1516]: time="2025-09-12T22:00:25.659910395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:25.660654 containerd[1516]: time="2025-09-12T22:00:25.660596346Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 12 22:00:25.661352 containerd[1516]: time="2025-09-12T22:00:25.661325100Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:25.664360 containerd[1516]: time="2025-09-12T22:00:25.664322446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:25.665884 containerd[1516]: time="2025-09-12T22:00:25.665721210Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.857965915s"
Sep 12 22:00:25.665884 containerd[1516]: time="2025-09-12T22:00:25.665758349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 12 22:00:25.667089 containerd[1516]: time="2025-09-12T22:00:25.667060300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 22:00:26.832544 containerd[1516]: time="2025-09-12T22:00:26.832224680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:26.833336 containerd[1516]: time="2025-09-12T22:00:26.833154526Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 12 22:00:26.834125 containerd[1516]: time="2025-09-12T22:00:26.834091240Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:26.836957 containerd[1516]: time="2025-09-12T22:00:26.836928085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:26.837878 containerd[1516]: time="2025-09-12T22:00:26.837850982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.170754405s"
Sep 12 22:00:26.837932 containerd[1516]: time="2025-09-12T22:00:26.837882979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 12 22:00:26.838411 containerd[1516]: time="2025-09-12T22:00:26.838329197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 22:00:27.930856 containerd[1516]: time="2025-09-12T22:00:27.930812564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:27.931531 containerd[1516]: time="2025-09-12T22:00:27.931241565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 12 22:00:27.932871 containerd[1516]: time="2025-09-12T22:00:27.932829902Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:27.935852 containerd[1516]: time="2025-09-12T22:00:27.935820158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:27.937423 containerd[1516]: time="2025-09-12T22:00:27.937313510Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.098892536s"
Sep 12 22:00:27.937423 containerd[1516]: time="2025-09-12T22:00:27.937343289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 12 22:00:27.937822 containerd[1516]: time="2025-09-12T22:00:27.937767528Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 22:00:28.162604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:00:28.164104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:00:28.313663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:00:28.326877 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:00:28.366218 kubelet[2048]: E0912 22:00:28.366167 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:00:28.369251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:00:28.369487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:00:28.369867 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107M memory peak.
Sep 12 22:00:28.993708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193977374.mount: Deactivated successfully.
Sep 12 22:00:29.327347 containerd[1516]: time="2025-09-12T22:00:29.327093851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:29.327720 containerd[1516]: time="2025-09-12T22:00:29.327597200Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 12 22:00:29.328446 containerd[1516]: time="2025-09-12T22:00:29.328415888Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:29.330225 containerd[1516]: time="2025-09-12T22:00:29.330177530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:29.330955 containerd[1516]: time="2025-09-12T22:00:29.330921521Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.3931227s"
Sep 12 22:00:29.330990 containerd[1516]: time="2025-09-12T22:00:29.330954742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 12 22:00:29.331676 containerd[1516]: time="2025-09-12T22:00:29.331652625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 22:00:30.020948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186944117.mount: Deactivated successfully.
Sep 12 22:00:30.742690 containerd[1516]: time="2025-09-12T22:00:30.742625249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:30.743096 containerd[1516]: time="2025-09-12T22:00:30.743069035Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 12 22:00:30.744073 containerd[1516]: time="2025-09-12T22:00:30.744040415Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:30.747080 containerd[1516]: time="2025-09-12T22:00:30.747026053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:30.747608 containerd[1516]: time="2025-09-12T22:00:30.747585433Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.415899432s"
Sep 12 22:00:30.747789 containerd[1516]: time="2025-09-12T22:00:30.747691813Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 22:00:30.748104 containerd[1516]: time="2025-09-12T22:00:30.748079029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 22:00:31.167556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380624080.mount: Deactivated successfully.
Sep 12 22:00:31.173904 containerd[1516]: time="2025-09-12T22:00:31.173847706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 22:00:31.174376 containerd[1516]: time="2025-09-12T22:00:31.174345405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 22:00:31.175334 containerd[1516]: time="2025-09-12T22:00:31.175278969Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 22:00:31.177528 containerd[1516]: time="2025-09-12T22:00:31.177413300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 22:00:31.178269 containerd[1516]: time="2025-09-12T22:00:31.178043314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.931421ms"
Sep 12 22:00:31.178269 containerd[1516]: time="2025-09-12T22:00:31.178073066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 22:00:31.178586 containerd[1516]: time="2025-09-12T22:00:31.178490838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 22:00:31.661106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853265588.mount: Deactivated successfully.
Sep 12 22:00:33.128621 containerd[1516]: time="2025-09-12T22:00:33.128559274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:33.129342 containerd[1516]: time="2025-09-12T22:00:33.129307681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 12 22:00:33.130215 containerd[1516]: time="2025-09-12T22:00:33.130181981Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:33.135825 containerd[1516]: time="2025-09-12T22:00:33.135786422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:00:33.137424 containerd[1516]: time="2025-09-12T22:00:33.137393387Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.958838433s"
Sep 12 22:00:33.137472 containerd[1516]: time="2025-09-12T22:00:33.137427641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 12 22:00:37.304669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:00:37.304867 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107M memory peak.
Sep 12 22:00:37.306790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:00:37.327372 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)...
Sep 12 22:00:37.327392 systemd[1]: Reloading...
Sep 12 22:00:37.399537 zram_generator::config[2247]: No configuration found.
Sep 12 22:00:37.555827 systemd[1]: Reloading finished in 228 ms.
Sep 12 22:00:37.621157 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 22:00:37.621246 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 22:00:37.621496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:00:37.621565 systemd[1]: kubelet.service: Consumed 86ms CPU time, 94.9M memory peak.
Sep 12 22:00:37.623152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:00:37.753000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:00:37.761847 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 22:00:37.796774 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 22:00:37.796774 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 22:00:37.796774 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 22:00:37.797122 kubelet[2292]: I0912 22:00:37.796822 2292 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 22:00:39.484153 kubelet[2292]: I0912 22:00:39.484108 2292 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 22:00:39.484153 kubelet[2292]: I0912 22:00:39.484142 2292 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 22:00:39.484535 kubelet[2292]: I0912 22:00:39.484375 2292 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 22:00:39.507965 kubelet[2292]: E0912 22:00:39.507914 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 12 22:00:39.508964 kubelet[2292]: I0912 22:00:39.508934 2292 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 22:00:39.516737 kubelet[2292]: I0912 22:00:39.516718 2292 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 22:00:39.520527 kubelet[2292]: I0912 22:00:39.520194 2292 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 22:00:39.520613 kubelet[2292]: I0912 22:00:39.520599 2292 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 22:00:39.520757 kubelet[2292]: I0912 22:00:39.520723 2292 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 22:00:39.520909 kubelet[2292]: I0912 22:00:39.520749 2292 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 22:00:39.520992 kubelet[2292]: I0912 22:00:39.520978 2292 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 22:00:39.520992 kubelet[2292]: I0912 22:00:39.520987 2292 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 22:00:39.521203 kubelet[2292]: I0912 22:00:39.521190 2292 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 22:00:39.523313 kubelet[2292]: I0912 22:00:39.523143 2292 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 22:00:39.523313 kubelet[2292]: I0912 22:00:39.523171 2292 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 22:00:39.523313 kubelet[2292]: I0912 22:00:39.523195 2292 kubelet.go:314] "Adding apiserver pod source"
Sep 12 22:00:39.523313 kubelet[2292]: I0912 22:00:39.523271 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 22:00:39.526806 kubelet[2292]: W0912 22:00:39.526667 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 12 22:00:39.526891 kubelet[2292]: E0912 22:00:39.526817 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 12 22:00:39.526891 kubelet[2292]: W0912 22:00:39.526684 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 12 22:00:39.526891 kubelet[2292]: E0912 22:00:39.526845 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 12 22:00:39.528963 kubelet[2292]: I0912 22:00:39.528941 2292 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 22:00:39.530185 kubelet[2292]: I0912 22:00:39.530148 2292 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 22:00:39.530287 kubelet[2292]: W0912 22:00:39.530273 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 22:00:39.531423 kubelet[2292]: I0912 22:00:39.531402 2292 server.go:1274] "Started kubelet" Sep 12 22:00:39.532494 kubelet[2292]: I0912 22:00:39.532374 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:00:39.532811 kubelet[2292]: I0912 22:00:39.532789 2292 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:00:39.532873 kubelet[2292]: I0912 22:00:39.532844 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:00:39.532895 kubelet[2292]: I0912 22:00:39.532875 2292 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:00:39.534459 kubelet[2292]: I0912 22:00:39.534071 2292 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:00:39.535183 kubelet[2292]: I0912 22:00:39.535155 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:00:39.535802 kubelet[2292]: E0912 22:00:39.534801 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864a7f850d7a906 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:00:39.531374854 +0000 UTC m=+1.766636165,LastTimestamp:2025-09-12 22:00:39.531374854 +0000 UTC m=+1.766636165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:00:39.536520 kubelet[2292]: I0912 22:00:39.536373 2292 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:00:39.536520 kubelet[2292]: I0912 22:00:39.536491 2292 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:00:39.536608 kubelet[2292]: I0912 22:00:39.536556 2292 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:00:39.536965 kubelet[2292]: W0912 22:00:39.536903 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 12 22:00:39.536965 kubelet[2292]: I0912 22:00:39.536952 2292 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:00:39.537040 kubelet[2292]: E0912 22:00:39.536968 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:00:39.537061 kubelet[2292]: I0912 22:00:39.537036 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:00:39.537722 kubelet[2292]: E0912 22:00:39.537424 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:00:39.537722 kubelet[2292]: E0912 22:00:39.537514 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Sep 12 22:00:39.538327 kubelet[2292]: E0912 22:00:39.538133 2292 kubelet.go:1478] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:00:39.538468 kubelet[2292]: I0912 22:00:39.538415 2292 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:00:39.550014 kubelet[2292]: I0912 22:00:39.549972 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:00:39.551321 kubelet[2292]: I0912 22:00:39.551279 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:00:39.551321 kubelet[2292]: I0912 22:00:39.551311 2292 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:00:39.551321 kubelet[2292]: I0912 22:00:39.551328 2292 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:00:39.551442 kubelet[2292]: E0912 22:00:39.551367 2292 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:00:39.554702 kubelet[2292]: I0912 22:00:39.554678 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:00:39.554702 kubelet[2292]: I0912 22:00:39.554695 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:00:39.554803 kubelet[2292]: I0912 22:00:39.554713 2292 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:00:39.554883 kubelet[2292]: W0912 22:00:39.554825 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 12 22:00:39.554883 kubelet[2292]: E0912 22:00:39.554879 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:00:39.558498 kubelet[2292]: I0912 22:00:39.558465 2292 policy_none.go:49] "None policy: Start" Sep 12 22:00:39.559137 kubelet[2292]: I0912 22:00:39.559100 2292 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:00:39.559137 kubelet[2292]: I0912 22:00:39.559128 2292 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:00:39.566701 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 22:00:39.580168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 22:00:39.583535 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 22:00:39.603343 kubelet[2292]: I0912 22:00:39.603315 2292 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:00:39.603923 kubelet[2292]: I0912 22:00:39.603905 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:00:39.604184 kubelet[2292]: I0912 22:00:39.604146 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:00:39.604499 kubelet[2292]: I0912 22:00:39.604469 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:00:39.605313 kubelet[2292]: E0912 22:00:39.605261 2292 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 22:00:39.658104 systemd[1]: Created slice kubepods-burstable-pod2d359307c784f49dc00d46da06f84cea.slice - libcontainer container kubepods-burstable-pod2d359307c784f49dc00d46da06f84cea.slice. Sep 12 22:00:39.674686 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 12 22:00:39.697017 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 22:00:39.705627 kubelet[2292]: I0912 22:00:39.705572 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:00:39.706114 kubelet[2292]: E0912 22:00:39.706075 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Sep 12 22:00:39.737650 kubelet[2292]: I0912 22:00:39.737561 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:00:39.737937 kubelet[2292]: E0912 22:00:39.737897 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Sep 12 22:00:39.838042 kubelet[2292]: I0912 22:00:39.838003 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:39.838042 kubelet[2292]: I0912 22:00:39.838057 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:39.838201 kubelet[2292]: I0912 22:00:39.838097 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:39.838201 kubelet[2292]: I0912 22:00:39.838117 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:39.838201 kubelet[2292]: I0912 22:00:39.838144 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:39.838201 kubelet[2292]: I0912 22:00:39.838161 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:39.838201 kubelet[2292]: I0912 22:00:39.838177 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:39.838297 kubelet[2292]: I0912 22:00:39.838191 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:39.907380 kubelet[2292]: I0912 22:00:39.907291 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:00:39.907723 kubelet[2292]: E0912 22:00:39.907683 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Sep 12 22:00:39.973431 kubelet[2292]: E0912 22:00:39.973387 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:39.974043 containerd[1516]: time="2025-09-12T22:00:39.973999790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2d359307c784f49dc00d46da06f84cea,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:39.989829 containerd[1516]: time="2025-09-12T22:00:39.989724292Z" level=info msg="connecting to shim d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9" address="unix:///run/containerd/s/c7f64501967c2dc1d42f07b6a9e394fa9cec1f599abf510184ee8663c6f4ef52" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:39.996098 kubelet[2292]: E0912 22:00:39.995384 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 12 22:00:39.997634 containerd[1516]: time="2025-09-12T22:00:39.997406727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:39.999750 kubelet[2292]: E0912 22:00:39.999719 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.000427 containerd[1516]: time="2025-09-12T22:00:40.000241874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:40.018667 systemd[1]: Started cri-containerd-d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9.scope - libcontainer container d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9. Sep 12 22:00:40.025538 containerd[1516]: time="2025-09-12T22:00:40.025485076Z" level=info msg="connecting to shim c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf" address="unix:///run/containerd/s/f2459276c7a3939f959cc48d890a88d651e8ae5a5d3544968b1cc93b9234c3a4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:40.026854 containerd[1516]: time="2025-09-12T22:00:40.026776745Z" level=info msg="connecting to shim 5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2" address="unix:///run/containerd/s/b90350df39f3e5f5fd7e152d74b4f2f32d499afaa660923237e254b44d67a979" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:40.053846 systemd[1]: Started cri-containerd-c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf.scope - libcontainer container c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf. 
Sep 12 22:00:40.057852 systemd[1]: Started cri-containerd-5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2.scope - libcontainer container 5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2. Sep 12 22:00:40.071992 containerd[1516]: time="2025-09-12T22:00:40.071954942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2d359307c784f49dc00d46da06f84cea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9\"" Sep 12 22:00:40.073146 kubelet[2292]: E0912 22:00:40.073128 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.075906 containerd[1516]: time="2025-09-12T22:00:40.075874176Z" level=info msg="CreateContainer within sandbox \"d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:00:40.090529 containerd[1516]: time="2025-09-12T22:00:40.090011302Z" level=info msg="Container 8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:00:40.097900 containerd[1516]: time="2025-09-12T22:00:40.097864793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2\"" Sep 12 22:00:40.098404 containerd[1516]: time="2025-09-12T22:00:40.098312563Z" level=info msg="CreateContainer within sandbox \"d78658983b6f6231488a2dd0a2c5aba589f0b6dc66e70c3b51b760c25ee825b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34\"" Sep 12 22:00:40.099416 containerd[1516]: time="2025-09-12T22:00:40.099389180Z" 
level=info msg="StartContainer for \"8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34\"" Sep 12 22:00:40.099999 kubelet[2292]: E0912 22:00:40.099961 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.100792 containerd[1516]: time="2025-09-12T22:00:40.100754843Z" level=info msg="connecting to shim 8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34" address="unix:///run/containerd/s/c7f64501967c2dc1d42f07b6a9e394fa9cec1f599abf510184ee8663c6f4ef52" protocol=ttrpc version=3 Sep 12 22:00:40.101244 containerd[1516]: time="2025-09-12T22:00:40.101213349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf\"" Sep 12 22:00:40.101648 containerd[1516]: time="2025-09-12T22:00:40.101624221Z" level=info msg="CreateContainer within sandbox \"5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:00:40.101791 kubelet[2292]: E0912 22:00:40.101767 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.103739 containerd[1516]: time="2025-09-12T22:00:40.103683472Z" level=info msg="CreateContainer within sandbox \"c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:00:40.111621 containerd[1516]: time="2025-09-12T22:00:40.111448908Z" level=info msg="Container 8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:00:40.114699 
containerd[1516]: time="2025-09-12T22:00:40.114656927Z" level=info msg="Container b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:00:40.120570 containerd[1516]: time="2025-09-12T22:00:40.120495276Z" level=info msg="CreateContainer within sandbox \"5d4c80ab616183040e3e800281c8d986029bb7a185d762e50f0e4370f59148f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873\"" Sep 12 22:00:40.121526 containerd[1516]: time="2025-09-12T22:00:40.121018241Z" level=info msg="StartContainer for \"8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873\"" Sep 12 22:00:40.122520 containerd[1516]: time="2025-09-12T22:00:40.122468674Z" level=info msg="CreateContainer within sandbox \"c6143bc3f8f04e26ea70d1d29203d4a82e47bb5273e58e42b2e2439c3c568eaf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1\"" Sep 12 22:00:40.122675 systemd[1]: Started cri-containerd-8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34.scope - libcontainer container 8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34. 
Sep 12 22:00:40.123286 containerd[1516]: time="2025-09-12T22:00:40.123119276Z" level=info msg="connecting to shim 8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873" address="unix:///run/containerd/s/b90350df39f3e5f5fd7e152d74b4f2f32d499afaa660923237e254b44d67a979" protocol=ttrpc version=3 Sep 12 22:00:40.124832 containerd[1516]: time="2025-09-12T22:00:40.124801065Z" level=info msg="StartContainer for \"b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1\"" Sep 12 22:00:40.126081 containerd[1516]: time="2025-09-12T22:00:40.126038490Z" level=info msg="connecting to shim b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1" address="unix:///run/containerd/s/f2459276c7a3939f959cc48d890a88d651e8ae5a5d3544968b1cc93b9234c3a4" protocol=ttrpc version=3 Sep 12 22:00:40.139689 kubelet[2292]: E0912 22:00:40.139652 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Sep 12 22:00:40.142670 systemd[1]: Started cri-containerd-8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873.scope - libcontainer container 8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873. Sep 12 22:00:40.146266 systemd[1]: Started cri-containerd-b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1.scope - libcontainer container b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1. 
Sep 12 22:00:40.172912 containerd[1516]: time="2025-09-12T22:00:40.172870353Z" level=info msg="StartContainer for \"8afbb3a0ac2d26a33fc072585e3ec7098474d422f0255300dae86e9c19619e34\" returns successfully" Sep 12 22:00:40.196361 containerd[1516]: time="2025-09-12T22:00:40.196302710Z" level=info msg="StartContainer for \"b1b503591a91d83a277b71bad4d0c2c740224a95c638cc0f0b0327a7ec3dcef1\" returns successfully" Sep 12 22:00:40.198744 containerd[1516]: time="2025-09-12T22:00:40.198712300Z" level=info msg="StartContainer for \"8d88b1671b0acbc9731a653aa9bedcabe5d0808ea474b7196452999836bfd873\" returns successfully" Sep 12 22:00:40.311322 kubelet[2292]: I0912 22:00:40.310947 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:00:40.559334 kubelet[2292]: E0912 22:00:40.559307 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.564611 kubelet[2292]: E0912 22:00:40.563026 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:40.564980 kubelet[2292]: E0912 22:00:40.564891 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:41.566534 kubelet[2292]: E0912 22:00:41.566428 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:41.566838 kubelet[2292]: E0912 22:00:41.566752 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:41.634875 kubelet[2292]: E0912 22:00:41.634829 
2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 22:00:41.685701 kubelet[2292]: I0912 22:00:41.685658 2292 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 22:00:42.525189 kubelet[2292]: I0912 22:00:42.525152 2292 apiserver.go:52] "Watching apiserver" Sep 12 22:00:42.537018 kubelet[2292]: I0912 22:00:42.536983 2292 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:00:43.515766 kubelet[2292]: E0912 22:00:43.515732 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:43.568122 kubelet[2292]: E0912 22:00:43.568096 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:43.703619 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-7.scope)... Sep 12 22:00:43.703634 systemd[1]: Reloading... Sep 12 22:00:43.778562 zram_generator::config[2610]: No configuration found. Sep 12 22:00:43.990236 systemd[1]: Reloading finished in 286 ms. Sep 12 22:00:44.023436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:00:44.037456 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:00:44.038593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:00:44.038659 systemd[1]: kubelet.service: Consumed 2.117s CPU time, 128.1M memory peak. Sep 12 22:00:44.040446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:00:44.176600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:00:44.180875 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:00:44.225169 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:00:44.225169 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 22:00:44.225473 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:00:44.225473 kubelet[2654]: I0912 22:00:44.225277 2654 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:00:44.231708 kubelet[2654]: I0912 22:00:44.231666 2654 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 22:00:44.231708 kubelet[2654]: I0912 22:00:44.231697 2654 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:00:44.231925 kubelet[2654]: I0912 22:00:44.231908 2654 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 22:00:44.233265 kubelet[2654]: I0912 22:00:44.233240 2654 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 22:00:44.235329 kubelet[2654]: I0912 22:00:44.235302 2654 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:00:44.239546 kubelet[2654]: I0912 22:00:44.239444 2654 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:00:44.243140 kubelet[2654]: I0912 22:00:44.243044 2654 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:00:44.243470 kubelet[2654]: I0912 22:00:44.243424 2654 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 22:00:44.243703 kubelet[2654]: I0912 22:00:44.243562 2654 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:00:44.245531 kubelet[2654]: I0912 22:00:44.245065 2654 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:00:44.245531 kubelet[2654]: I0912 22:00:44.245381 2654 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:00:44.245531 kubelet[2654]: I0912 22:00:44.245392 2654 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 22:00:44.245531 kubelet[2654]: I0912 22:00:44.245437 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:00:44.245721 kubelet[2654]: I0912 22:00:44.245587 2654 kubelet.go:408] "Attempting to sync node with API server" Sep 12 22:00:44.245721 kubelet[2654]: I0912 22:00:44.245618 2654 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:00:44.245721 kubelet[2654]: I0912 22:00:44.245639 2654 kubelet.go:314] "Adding apiserver pod source" Sep 12 22:00:44.245721 kubelet[2654]: I0912 22:00:44.245653 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:00:44.247430 kubelet[2654]: I0912 22:00:44.246920 2654 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:00:44.248335 kubelet[2654]: I0912 22:00:44.248295 2654 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:00:44.249846 kubelet[2654]: I0912 22:00:44.249482 2654 server.go:1274] "Started kubelet" Sep 12 22:00:44.249846 kubelet[2654]: I0912 22:00:44.249684 2654 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 
22:00:44.249846 kubelet[2654]: I0912 22:00:44.249727 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:00:44.250751 kubelet[2654]: I0912 22:00:44.250729 2654 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:00:44.251696 kubelet[2654]: I0912 22:00:44.251662 2654 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:00:44.253015 kubelet[2654]: I0912 22:00:44.252986 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:00:44.254979 kubelet[2654]: I0912 22:00:44.254942 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:00:44.260481 kubelet[2654]: E0912 22:00:44.260453 2654 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:00:44.260734 kubelet[2654]: I0912 22:00:44.260721 2654 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:00:44.261403 kubelet[2654]: I0912 22:00:44.261384 2654 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:00:44.261677 kubelet[2654]: I0912 22:00:44.261659 2654 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:00:44.262832 kubelet[2654]: E0912 22:00:44.262806 2654 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:00:44.264383 kubelet[2654]: I0912 22:00:44.264352 2654 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:00:44.264522 kubelet[2654]: I0912 22:00:44.264470 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:00:44.266478 kubelet[2654]: I0912 22:00:44.266354 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:00:44.267338 kubelet[2654]: I0912 22:00:44.267321 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:00:44.267602 kubelet[2654]: I0912 22:00:44.267586 2654 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:00:44.267684 kubelet[2654]: I0912 22:00:44.267673 2654 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:00:44.267774 kubelet[2654]: E0912 22:00:44.267759 2654 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:00:44.272471 kubelet[2654]: I0912 22:00:44.272440 2654 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:00:44.315410 kubelet[2654]: I0912 22:00:44.315379 2654 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:00:44.315410 kubelet[2654]: I0912 22:00:44.315399 2654 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:00:44.315410 kubelet[2654]: I0912 22:00:44.315419 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:00:44.315597 kubelet[2654]: I0912 22:00:44.315579 2654 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 22:00:44.315624 kubelet[2654]: I0912 22:00:44.315597 2654 state_mem.go:96] "Updated CPUSet assignments" assignments={} 
Sep 12 22:00:44.315624 kubelet[2654]: I0912 22:00:44.315614 2654 policy_none.go:49] "None policy: Start" Sep 12 22:00:44.316268 kubelet[2654]: I0912 22:00:44.316250 2654 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:00:44.316302 kubelet[2654]: I0912 22:00:44.316275 2654 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:00:44.316492 kubelet[2654]: I0912 22:00:44.316475 2654 state_mem.go:75] "Updated machine memory state" Sep 12 22:00:44.322532 kubelet[2654]: I0912 22:00:44.322381 2654 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:00:44.322602 kubelet[2654]: I0912 22:00:44.322571 2654 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:00:44.322622 kubelet[2654]: I0912 22:00:44.322584 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:00:44.322832 kubelet[2654]: I0912 22:00:44.322814 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:00:44.378530 kubelet[2654]: E0912 22:00:44.376702 2654 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 22:00:44.427066 kubelet[2654]: I0912 22:00:44.427041 2654 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:00:44.434976 kubelet[2654]: I0912 22:00:44.434943 2654 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 22:00:44.435076 kubelet[2654]: I0912 22:00:44.435025 2654 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 22:00:44.562636 kubelet[2654]: I0912 22:00:44.562493 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:44.562636 kubelet[2654]: I0912 22:00:44.562556 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:44.562636 kubelet[2654]: I0912 22:00:44.562577 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:44.562636 kubelet[2654]: I0912 22:00:44.562594 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:00:44.562636 kubelet[2654]: I0912 22:00:44.562611 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d359307c784f49dc00d46da06f84cea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2d359307c784f49dc00d46da06f84cea\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:00:44.562818 kubelet[2654]: I0912 22:00:44.562638 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:44.562818 kubelet[2654]: I0912 22:00:44.562652 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:44.562818 kubelet[2654]: I0912 22:00:44.562666 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:44.562818 kubelet[2654]: I0912 22:00:44.562682 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:00:44.676972 kubelet[2654]: E0912 22:00:44.676855 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:44.676972 kubelet[2654]: E0912 22:00:44.676855 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:44.676972 kubelet[2654]: E0912 22:00:44.676869 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:44.696575 sudo[2688]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 22:00:44.696860 sudo[2688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 22:00:45.005793 sudo[2688]: pam_unix(sudo:session): session closed for user root Sep 12 22:00:45.246734 kubelet[2654]: I0912 22:00:45.246680 2654 apiserver.go:52] "Watching apiserver" Sep 12 22:00:45.261712 kubelet[2654]: I0912 22:00:45.261564 2654 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:00:45.293181 kubelet[2654]: E0912 22:00:45.293137 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:45.294551 kubelet[2654]: E0912 22:00:45.293194 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:45.294551 kubelet[2654]: E0912 22:00:45.293810 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:45.334964 kubelet[2654]: I0912 22:00:45.334894 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.334876794 podStartE2EDuration="1.334876794s" podCreationTimestamp="2025-09-12 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:00:45.326057144 +0000 UTC m=+1.140230991" watchObservedRunningTime="2025-09-12 22:00:45.334876794 +0000 UTC m=+1.149050641" Sep 12 22:00:45.335125 kubelet[2654]: I0912 22:00:45.335011 2654 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.335006817 podStartE2EDuration="1.335006817s" podCreationTimestamp="2025-09-12 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:00:45.334170316 +0000 UTC m=+1.148344163" watchObservedRunningTime="2025-09-12 22:00:45.335006817 +0000 UTC m=+1.149180664" Sep 12 22:00:45.341434 kubelet[2654]: I0912 22:00:45.341387 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.341374049 podStartE2EDuration="2.341374049s" podCreationTimestamp="2025-09-12 22:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:00:45.34075616 +0000 UTC m=+1.154930007" watchObservedRunningTime="2025-09-12 22:00:45.341374049 +0000 UTC m=+1.155547896" Sep 12 22:00:46.294736 kubelet[2654]: E0912 22:00:46.294707 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:46.797805 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 12 22:00:46.798898 sshd[1733]: Connection closed by 10.0.0.1 port 47848 Sep 12 22:00:46.799798 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 12 22:00:46.802997 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:47848.service: Deactivated successfully. Sep 12 22:00:46.805100 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 22:00:46.805399 systemd[1]: session-7.scope: Consumed 5.992s CPU time, 257.9M memory peak. Sep 12 22:00:46.806420 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. 
Sep 12 22:00:46.807596 systemd-logind[1498]: Removed session 7. Sep 12 22:00:48.844132 kubelet[2654]: I0912 22:00:48.844102 2654 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 22:00:48.844767 containerd[1516]: time="2025-09-12T22:00:48.844675297Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 22:00:48.845003 kubelet[2654]: I0912 22:00:48.844839 2654 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 22:00:49.815776 systemd[1]: Created slice kubepods-besteffort-pod54597134_8a22_449b_aa28_3d0dce1ef26e.slice - libcontainer container kubepods-besteffort-pod54597134_8a22_449b_aa28_3d0dce1ef26e.slice. Sep 12 22:00:49.828849 systemd[1]: Created slice kubepods-burstable-podb7af3e26_aad0_4be9_9ea5_5eb501e638b7.slice - libcontainer container kubepods-burstable-podb7af3e26_aad0_4be9_9ea5_5eb501e638b7.slice. Sep 12 22:00:49.893887 kubelet[2654]: I0912 22:00:49.893846 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-bpf-maps\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.893887 kubelet[2654]: I0912 22:00:49.893889 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-net\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894252 kubelet[2654]: I0912 22:00:49.893906 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-etc-cni-netd\") 
pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894252 kubelet[2654]: I0912 22:00:49.893923 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54597134-8a22-449b-aa28-3d0dce1ef26e-kube-proxy\") pod \"kube-proxy-nb9qf\" (UID: \"54597134-8a22-449b-aa28-3d0dce1ef26e\") " pod="kube-system/kube-proxy-nb9qf" Sep 12 22:00:49.894252 kubelet[2654]: I0912 22:00:49.893939 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrnz\" (UniqueName: \"kubernetes.io/projected/54597134-8a22-449b-aa28-3d0dce1ef26e-kube-api-access-plrnz\") pod \"kube-proxy-nb9qf\" (UID: \"54597134-8a22-449b-aa28-3d0dce1ef26e\") " pod="kube-system/kube-proxy-nb9qf" Sep 12 22:00:49.894252 kubelet[2654]: I0912 22:00:49.893957 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-cgroup\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894252 kubelet[2654]: I0912 22:00:49.893970 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cni-path\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.893984 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-kernel\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" 
Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.893996 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54597134-8a22-449b-aa28-3d0dce1ef26e-lib-modules\") pod \"kube-proxy-nb9qf\" (UID: \"54597134-8a22-449b-aa28-3d0dce1ef26e\") " pod="kube-system/kube-proxy-nb9qf" Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.894011 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-clustermesh-secrets\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.894026 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-config-path\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.894042 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hubble-tls\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894349 kubelet[2654]: I0912 22:00:49.894056 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hostproc\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894461 kubelet[2654]: I0912 22:00:49.894071 2654 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-xtables-lock\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894461 kubelet[2654]: I0912 22:00:49.894097 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwfm\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-kube-api-access-fjwfm\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894461 kubelet[2654]: I0912 22:00:49.894114 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-run\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894461 kubelet[2654]: I0912 22:00:49.894135 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-lib-modules\") pod \"cilium-fstsq\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " pod="kube-system/cilium-fstsq" Sep 12 22:00:49.894461 kubelet[2654]: I0912 22:00:49.894154 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54597134-8a22-449b-aa28-3d0dce1ef26e-xtables-lock\") pod \"kube-proxy-nb9qf\" (UID: \"54597134-8a22-449b-aa28-3d0dce1ef26e\") " pod="kube-system/kube-proxy-nb9qf" Sep 12 22:00:50.032940 systemd[1]: Created slice kubepods-besteffort-pod65f93ffa_c339_4fba_809f_d74e68bf7c96.slice - libcontainer container kubepods-besteffort-pod65f93ffa_c339_4fba_809f_d74e68bf7c96.slice. 
Sep 12 22:00:50.096556 kubelet[2654]: I0912 22:00:50.096429 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65f93ffa-c339-4fba-809f-d74e68bf7c96-cilium-config-path\") pod \"cilium-operator-5d85765b45-d4chl\" (UID: \"65f93ffa-c339-4fba-809f-d74e68bf7c96\") " pod="kube-system/cilium-operator-5d85765b45-d4chl" Sep 12 22:00:50.096556 kubelet[2654]: I0912 22:00:50.096491 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmf7\" (UniqueName: \"kubernetes.io/projected/65f93ffa-c339-4fba-809f-d74e68bf7c96-kube-api-access-hcmf7\") pod \"cilium-operator-5d85765b45-d4chl\" (UID: \"65f93ffa-c339-4fba-809f-d74e68bf7c96\") " pod="kube-system/cilium-operator-5d85765b45-d4chl" Sep 12 22:00:50.126690 kubelet[2654]: E0912 22:00:50.126648 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.127347 containerd[1516]: time="2025-09-12T22:00:50.127217849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nb9qf,Uid:54597134-8a22-449b-aa28-3d0dce1ef26e,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:50.133078 kubelet[2654]: E0912 22:00:50.133050 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.133611 containerd[1516]: time="2025-09-12T22:00:50.133575816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fstsq,Uid:b7af3e26-aad0-4be9-9ea5-5eb501e638b7,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:50.193152 containerd[1516]: time="2025-09-12T22:00:50.193108243Z" level=info msg="connecting to shim 08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc" 
address="unix:///run/containerd/s/b6983539f7942cd1a4a8564afbd71e32d0ec3efb916fb3a3335ce22d289081b5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:50.194120 containerd[1516]: time="2025-09-12T22:00:50.194090736Z" level=info msg="connecting to shim 2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:50.218700 kubelet[2654]: E0912 22:00:50.218660 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.220673 systemd[1]: Started cri-containerd-08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc.scope - libcontainer container 08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc. Sep 12 22:00:50.224351 systemd[1]: Started cri-containerd-2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3.scope - libcontainer container 2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3. 
Sep 12 22:00:50.257283 containerd[1516]: time="2025-09-12T22:00:50.257237459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nb9qf,Uid:54597134-8a22-449b-aa28-3d0dce1ef26e,Namespace:kube-system,Attempt:0,} returns sandbox id \"08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc\"" Sep 12 22:00:50.258190 kubelet[2654]: E0912 22:00:50.258143 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.261220 containerd[1516]: time="2025-09-12T22:00:50.261183161Z" level=info msg="CreateContainer within sandbox \"08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 22:00:50.262080 containerd[1516]: time="2025-09-12T22:00:50.262041056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fstsq,Uid:b7af3e26-aad0-4be9-9ea5-5eb501e638b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\"" Sep 12 22:00:50.263991 kubelet[2654]: E0912 22:00:50.263722 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.265569 containerd[1516]: time="2025-09-12T22:00:50.265536998Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 22:00:50.274025 containerd[1516]: time="2025-09-12T22:00:50.273987871Z" level=info msg="Container 6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:00:50.282582 containerd[1516]: time="2025-09-12T22:00:50.282539607Z" level=info msg="CreateContainer within sandbox \"08387b1e0b5e43e24ee049df9c909a562629508dfe221e4a9db03ee183895ccc\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa\"" Sep 12 22:00:50.283317 containerd[1516]: time="2025-09-12T22:00:50.283279669Z" level=info msg="StartContainer for \"6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa\"" Sep 12 22:00:50.284875 containerd[1516]: time="2025-09-12T22:00:50.284846126Z" level=info msg="connecting to shim 6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa" address="unix:///run/containerd/s/b6983539f7942cd1a4a8564afbd71e32d0ec3efb916fb3a3335ce22d289081b5" protocol=ttrpc version=3 Sep 12 22:00:50.304278 kubelet[2654]: E0912 22:00:50.304245 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.305659 systemd[1]: Started cri-containerd-6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa.scope - libcontainer container 6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa. 
Sep 12 22:00:50.336787 kubelet[2654]: E0912 22:00:50.336644 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:50.337304 containerd[1516]: time="2025-09-12T22:00:50.337255549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d4chl,Uid:65f93ffa-c339-4fba-809f-d74e68bf7c96,Namespace:kube-system,Attempt:0,}" Sep 12 22:00:50.341176 containerd[1516]: time="2025-09-12T22:00:50.341145016Z" level=info msg="StartContainer for \"6fe95502fee690cb1c541e7de0260a9b9c8e4b4c0ecd7530efcdb8dbed1ff0aa\" returns successfully" Sep 12 22:00:50.355485 containerd[1516]: time="2025-09-12T22:00:50.355051774Z" level=info msg="connecting to shim 902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca" address="unix:///run/containerd/s/27d882837c3fa4662dcd5ec6930a70e35ce303709bb26e7c1a1b2c80ecaf24ed" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:00:50.383671 systemd[1]: Started cri-containerd-902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca.scope - libcontainer container 902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca. 
Sep 12 22:00:50.425216 containerd[1516]: time="2025-09-12T22:00:50.425174569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d4chl,Uid:65f93ffa-c339-4fba-809f-d74e68bf7c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\"" Sep 12 22:00:50.426100 kubelet[2654]: E0912 22:00:50.426080 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:51.309518 kubelet[2654]: E0912 22:00:51.309472 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:51.321747 kubelet[2654]: I0912 22:00:51.321623 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nb9qf" podStartSLOduration=2.321606606 podStartE2EDuration="2.321606606s" podCreationTimestamp="2025-09-12 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:00:51.321473047 +0000 UTC m=+7.135646854" watchObservedRunningTime="2025-09-12 22:00:51.321606606 +0000 UTC m=+7.135780413" Sep 12 22:00:52.310358 kubelet[2654]: E0912 22:00:52.310330 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:52.785199 kubelet[2654]: E0912 22:00:52.785161 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:53.314058 kubelet[2654]: E0912 22:00:53.312820 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:53.824781 kubelet[2654]: E0912 22:00:53.824702 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:54.313767 kubelet[2654]: E0912 22:00:54.313706 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:54.313767 kubelet[2654]: E0912 22:00:54.313722 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:00:59.310472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559130661.mount: Deactivated successfully. Sep 12 22:01:00.544799 update_engine[1500]: I20250912 22:01:00.544734 1500 update_attempter.cc:509] Updating boot flags... 
Sep 12 22:01:03.337372 containerd[1516]: time="2025-09-12T22:01:03.337313253Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:01:03.337914 containerd[1516]: time="2025-09-12T22:01:03.337875469Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 22:01:03.338792 containerd[1516]: time="2025-09-12T22:01:03.338738900Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:01:03.340176 containerd[1516]: time="2025-09-12T22:01:03.340147022Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.074571443s" Sep 12 22:01:03.340255 containerd[1516]: time="2025-09-12T22:01:03.340187915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 22:01:03.346459 containerd[1516]: time="2025-09-12T22:01:03.346310317Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 22:01:03.359543 containerd[1516]: time="2025-09-12T22:01:03.359475809Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 22:01:03.372429 containerd[1516]: time="2025-09-12T22:01:03.372237494Z" level=info msg="Container 8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:03.381536 containerd[1516]: time="2025-09-12T22:01:03.381461189Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\"" Sep 12 22:01:03.382124 containerd[1516]: time="2025-09-12T22:01:03.382026087Z" level=info msg="StartContainer for \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\"" Sep 12 22:01:03.383254 containerd[1516]: time="2025-09-12T22:01:03.383189372Z" level=info msg="connecting to shim 8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" protocol=ttrpc version=3 Sep 12 22:01:03.426727 systemd[1]: Started cri-containerd-8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38.scope - libcontainer container 8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38. Sep 12 22:01:03.483037 systemd[1]: cri-containerd-8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38.scope: Deactivated successfully. 
Sep 12 22:01:03.515786 containerd[1516]: time="2025-09-12T22:01:03.515746817Z" level=info msg="StartContainer for \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" returns successfully" Sep 12 22:01:03.533418 containerd[1516]: time="2025-09-12T22:01:03.533290564Z" level=info msg="received exit event container_id:\"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" id:\"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" pid:3088 exited_at:{seconds:1757714463 nanos:526856064}" Sep 12 22:01:03.533563 containerd[1516]: time="2025-09-12T22:01:03.533363186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" id:\"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" pid:3088 exited_at:{seconds:1757714463 nanos:526856064}" Sep 12 22:01:04.336561 kubelet[2654]: E0912 22:01:04.336470 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:04.339089 containerd[1516]: time="2025-09-12T22:01:04.338993705Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 22:01:04.348772 containerd[1516]: time="2025-09-12T22:01:04.348119435Z" level=info msg="Container 8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:04.355191 containerd[1516]: time="2025-09-12T22:01:04.355151858Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\"" Sep 12 22:01:04.356467 containerd[1516]: 
time="2025-09-12T22:01:04.356158279Z" level=info msg="StartContainer for \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\"" Sep 12 22:01:04.357333 containerd[1516]: time="2025-09-12T22:01:04.357293219Z" level=info msg="connecting to shim 8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" protocol=ttrpc version=3 Sep 12 22:01:04.372800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38-rootfs.mount: Deactivated successfully. Sep 12 22:01:04.388721 systemd[1]: Started cri-containerd-8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482.scope - libcontainer container 8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482. Sep 12 22:01:04.430917 containerd[1516]: time="2025-09-12T22:01:04.430872827Z" level=info msg="StartContainer for \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" returns successfully" Sep 12 22:01:04.444669 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:01:04.445158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:01:04.445586 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:01:04.446912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:01:04.448923 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 22:01:04.449290 systemd[1]: cri-containerd-8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482.scope: Deactivated successfully. 
Sep 12 22:01:04.451674 containerd[1516]: time="2025-09-12T22:01:04.451637038Z" level=info msg="received exit event container_id:\"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" id:\"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" pid:3134 exited_at:{seconds:1757714464 nanos:448494618}" Sep 12 22:01:04.452480 containerd[1516]: time="2025-09-12T22:01:04.452455243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" id:\"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" pid:3134 exited_at:{seconds:1757714464 nanos:448494618}" Sep 12 22:01:04.469556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:01:04.474906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482-rootfs.mount: Deactivated successfully. Sep 12 22:01:05.311952 containerd[1516]: time="2025-09-12T22:01:05.311904341Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:01:05.312522 containerd[1516]: time="2025-09-12T22:01:05.312350348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 22:01:05.313416 containerd[1516]: time="2025-09-12T22:01:05.313382723Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:01:05.314762 containerd[1516]: time="2025-09-12T22:01:05.314728226Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.968376177s" Sep 12 22:01:05.314762 containerd[1516]: time="2025-09-12T22:01:05.314761796Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 22:01:05.317696 containerd[1516]: time="2025-09-12T22:01:05.317663304Z" level=info msg="CreateContainer within sandbox \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 22:01:05.340938 kubelet[2654]: E0912 22:01:05.340908 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:05.345146 containerd[1516]: time="2025-09-12T22:01:05.344980417Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 22:01:05.350390 containerd[1516]: time="2025-09-12T22:01:05.350356711Z" level=info msg="Container c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:05.357657 containerd[1516]: time="2025-09-12T22:01:05.357451294Z" level=info msg="CreateContainer within sandbox \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\"" Sep 12 22:01:05.358071 containerd[1516]: time="2025-09-12T22:01:05.358043663Z" level=info msg="StartContainer for 
\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\"" Sep 12 22:01:05.358898 containerd[1516]: time="2025-09-12T22:01:05.358869499Z" level=info msg="connecting to shim c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56" address="unix:///run/containerd/s/27d882837c3fa4662dcd5ec6930a70e35ce303709bb26e7c1a1b2c80ecaf24ed" protocol=ttrpc version=3 Sep 12 22:01:05.360745 containerd[1516]: time="2025-09-12T22:01:05.360508727Z" level=info msg="Container 3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:05.368879 containerd[1516]: time="2025-09-12T22:01:05.368849546Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\"" Sep 12 22:01:05.370022 containerd[1516]: time="2025-09-12T22:01:05.369906288Z" level=info msg="StartContainer for \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\"" Sep 12 22:01:05.373712 containerd[1516]: time="2025-09-12T22:01:05.371209579Z" level=info msg="connecting to shim 3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" protocol=ttrpc version=3 Sep 12 22:01:05.372869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195325247.mount: Deactivated successfully. Sep 12 22:01:05.395704 systemd[1]: Started cri-containerd-c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56.scope - libcontainer container c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56. Sep 12 22:01:05.399158 systemd[1]: Started cri-containerd-3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff.scope - libcontainer container 3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff. 
Sep 12 22:01:05.439445 systemd[1]: cri-containerd-3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff.scope: Deactivated successfully. Sep 12 22:01:05.441814 containerd[1516]: time="2025-09-12T22:01:05.441676202Z" level=info msg="received exit event container_id:\"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" id:\"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" pid:3215 exited_at:{seconds:1757714465 nanos:440357426}" Sep 12 22:01:05.441814 containerd[1516]: time="2025-09-12T22:01:05.441770149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" id:\"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" pid:3215 exited_at:{seconds:1757714465 nanos:440357426}" Sep 12 22:01:05.442244 containerd[1516]: time="2025-09-12T22:01:05.442212155Z" level=info msg="StartContainer for \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" returns successfully" Sep 12 22:01:05.461407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff-rootfs.mount: Deactivated successfully. 
Sep 12 22:01:05.522977 containerd[1516]: time="2025-09-12T22:01:05.522931823Z" level=info msg="StartContainer for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" returns successfully" Sep 12 22:01:06.341932 kubelet[2654]: E0912 22:01:06.341884 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:06.345988 kubelet[2654]: E0912 22:01:06.345964 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:06.348100 containerd[1516]: time="2025-09-12T22:01:06.348050237Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 22:01:06.355143 kubelet[2654]: I0912 22:01:06.354446 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d4chl" podStartSLOduration=2.465814951 podStartE2EDuration="17.354431015s" podCreationTimestamp="2025-09-12 22:00:49 +0000 UTC" firstStartedPulling="2025-09-12 22:00:50.4270832 +0000 UTC m=+6.241257047" lastFinishedPulling="2025-09-12 22:01:05.315699263 +0000 UTC m=+21.129873111" observedRunningTime="2025-09-12 22:01:06.35283638 +0000 UTC m=+22.167010187" watchObservedRunningTime="2025-09-12 22:01:06.354431015 +0000 UTC m=+22.168604862" Sep 12 22:01:06.363746 containerd[1516]: time="2025-09-12T22:01:06.362417069Z" level=info msg="Container 1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:06.370396 containerd[1516]: time="2025-09-12T22:01:06.370207231Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\"" Sep 12 22:01:06.372493 containerd[1516]: time="2025-09-12T22:01:06.371010049Z" level=info msg="StartContainer for \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\"" Sep 12 22:01:06.372493 containerd[1516]: time="2025-09-12T22:01:06.371895130Z" level=info msg="connecting to shim 1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" protocol=ttrpc version=3 Sep 12 22:01:06.399676 systemd[1]: Started cri-containerd-1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8.scope - libcontainer container 1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8. Sep 12 22:01:06.423983 systemd[1]: cri-containerd-1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8.scope: Deactivated successfully. 
Sep 12 22:01:06.424555 containerd[1516]: time="2025-09-12T22:01:06.424494814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" id:\"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" pid:3273 exited_at:{seconds:1757714466 nanos:424169645}" Sep 12 22:01:06.426042 containerd[1516]: time="2025-09-12T22:01:06.426007146Z" level=info msg="received exit event container_id:\"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" id:\"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" pid:3273 exited_at:{seconds:1757714466 nanos:424169645}" Sep 12 22:01:06.432165 containerd[1516]: time="2025-09-12T22:01:06.432125052Z" level=info msg="StartContainer for \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" returns successfully" Sep 12 22:01:06.446325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8-rootfs.mount: Deactivated successfully. Sep 12 22:01:07.351547 kubelet[2654]: E0912 22:01:07.350765 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:07.351547 kubelet[2654]: E0912 22:01:07.350771 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:07.352609 containerd[1516]: time="2025-09-12T22:01:07.352480034Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 22:01:07.368487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269565853.mount: Deactivated successfully. 
Sep 12 22:01:07.369009 containerd[1516]: time="2025-09-12T22:01:07.368968283Z" level=info msg="Container a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:07.378878 containerd[1516]: time="2025-09-12T22:01:07.378819646Z" level=info msg="CreateContainer within sandbox \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\"" Sep 12 22:01:07.379663 containerd[1516]: time="2025-09-12T22:01:07.379283927Z" level=info msg="StartContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\"" Sep 12 22:01:07.380480 containerd[1516]: time="2025-09-12T22:01:07.380437187Z" level=info msg="connecting to shim a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8" address="unix:///run/containerd/s/930ba85b21446855b32ee7d01257a258425b8f9c00508c0a2dae5c6d5ae92065" protocol=ttrpc version=3 Sep 12 22:01:07.400668 systemd[1]: Started cri-containerd-a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8.scope - libcontainer container a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8. 
Sep 12 22:01:07.436943 containerd[1516]: time="2025-09-12T22:01:07.436895075Z" level=info msg="StartContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" returns successfully" Sep 12 22:01:07.520295 containerd[1516]: time="2025-09-12T22:01:07.520255801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" id:\"57a8f091a736e40dc8c3f7778011748034769cc2b5dee138d91cb334cce080c6\" pid:3341 exited_at:{seconds:1757714467 nanos:519942240}" Sep 12 22:01:07.544981 kubelet[2654]: I0912 22:01:07.544928 2654 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 22:01:07.584978 systemd[1]: Created slice kubepods-burstable-pod38f585d2_7427_482b_aaff_37691c45a506.slice - libcontainer container kubepods-burstable-pod38f585d2_7427_482b_aaff_37691c45a506.slice. Sep 12 22:01:07.593355 systemd[1]: Created slice kubepods-burstable-pod40aca368_5485_4462_ae8e_be4cfd18d945.slice - libcontainer container kubepods-burstable-pod40aca368_5485_4462_ae8e_be4cfd18d945.slice. 
Sep 12 22:01:07.624698 kubelet[2654]: I0912 22:01:07.624587 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38f585d2-7427-482b-aaff-37691c45a506-config-volume\") pod \"coredns-7c65d6cfc9-mcfqt\" (UID: \"38f585d2-7427-482b-aaff-37691c45a506\") " pod="kube-system/coredns-7c65d6cfc9-mcfqt" Sep 12 22:01:07.624698 kubelet[2654]: I0912 22:01:07.624629 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40aca368-5485-4462-ae8e-be4cfd18d945-config-volume\") pod \"coredns-7c65d6cfc9-6bgw9\" (UID: \"40aca368-5485-4462-ae8e-be4cfd18d945\") " pod="kube-system/coredns-7c65d6cfc9-6bgw9" Sep 12 22:01:07.624698 kubelet[2654]: I0912 22:01:07.624649 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p9g\" (UniqueName: \"kubernetes.io/projected/40aca368-5485-4462-ae8e-be4cfd18d945-kube-api-access-l4p9g\") pod \"coredns-7c65d6cfc9-6bgw9\" (UID: \"40aca368-5485-4462-ae8e-be4cfd18d945\") " pod="kube-system/coredns-7c65d6cfc9-6bgw9" Sep 12 22:01:07.624698 kubelet[2654]: I0912 22:01:07.624679 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8hjz\" (UniqueName: \"kubernetes.io/projected/38f585d2-7427-482b-aaff-37691c45a506-kube-api-access-d8hjz\") pod \"coredns-7c65d6cfc9-mcfqt\" (UID: \"38f585d2-7427-482b-aaff-37691c45a506\") " pod="kube-system/coredns-7c65d6cfc9-mcfqt" Sep 12 22:01:07.889452 kubelet[2654]: E0912 22:01:07.889323 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:07.896255 containerd[1516]: time="2025-09-12T22:01:07.896215087Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mcfqt,Uid:38f585d2-7427-482b-aaff-37691c45a506,Namespace:kube-system,Attempt:0,}" Sep 12 22:01:07.896389 kubelet[2654]: E0912 22:01:07.896252 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:07.896949 containerd[1516]: time="2025-09-12T22:01:07.896838210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bgw9,Uid:40aca368-5485-4462-ae8e-be4cfd18d945,Namespace:kube-system,Attempt:0,}" Sep 12 22:01:08.357050 kubelet[2654]: E0912 22:01:08.356987 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:08.374006 kubelet[2654]: I0912 22:01:08.373690 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fstsq" podStartSLOduration=6.292546834 podStartE2EDuration="19.373673285s" podCreationTimestamp="2025-09-12 22:00:49 +0000 UTC" firstStartedPulling="2025-09-12 22:00:50.264953073 +0000 UTC m=+6.079126920" lastFinishedPulling="2025-09-12 22:01:03.346079564 +0000 UTC m=+19.160253371" observedRunningTime="2025-09-12 22:01:08.373075377 +0000 UTC m=+24.187249304" watchObservedRunningTime="2025-09-12 22:01:08.373673285 +0000 UTC m=+24.187847132" Sep 12 22:01:09.358592 kubelet[2654]: E0912 22:01:09.358563 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:09.484013 systemd-networkd[1432]: cilium_host: Link UP Sep 12 22:01:09.484126 systemd-networkd[1432]: cilium_net: Link UP Sep 12 22:01:09.484249 systemd-networkd[1432]: cilium_host: Gained carrier Sep 12 22:01:09.484631 systemd-networkd[1432]: cilium_net: Gained carrier Sep 12 22:01:09.588140 
systemd-networkd[1432]: cilium_vxlan: Link UP Sep 12 22:01:09.588153 systemd-networkd[1432]: cilium_vxlan: Gained carrier Sep 12 22:01:09.633708 systemd-networkd[1432]: cilium_net: Gained IPv6LL Sep 12 22:01:09.944556 kernel: NET: Registered PF_ALG protocol family Sep 12 22:01:10.145722 systemd-networkd[1432]: cilium_host: Gained IPv6LL Sep 12 22:01:10.362070 kubelet[2654]: E0912 22:01:10.361970 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:10.603044 systemd-networkd[1432]: lxc_health: Link UP Sep 12 22:01:10.608325 systemd-networkd[1432]: lxc_health: Gained carrier Sep 12 22:01:10.944101 systemd-networkd[1432]: lxcce558da32a73: Link UP Sep 12 22:01:10.952605 kernel: eth0: renamed from tmp65d86 Sep 12 22:01:10.953536 systemd-networkd[1432]: lxc290158e26bae: Link UP Sep 12 22:01:10.955427 systemd-networkd[1432]: lxcce558da32a73: Gained carrier Sep 12 22:01:10.959889 kernel: eth0: renamed from tmpcbf49 Sep 12 22:01:10.960538 systemd-networkd[1432]: lxc290158e26bae: Gained carrier Sep 12 22:01:11.168671 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Sep 12 22:01:11.680728 systemd-networkd[1432]: lxc_health: Gained IPv6LL Sep 12 22:01:12.128654 systemd-networkd[1432]: lxcce558da32a73: Gained IPv6LL Sep 12 22:01:12.138236 kubelet[2654]: E0912 22:01:12.138198 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:12.192642 systemd-networkd[1432]: lxc290158e26bae: Gained IPv6LL Sep 12 22:01:12.366119 kubelet[2654]: E0912 22:01:12.366029 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:13.367824 kubelet[2654]: E0912 22:01:13.367659 2654 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:13.617493 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:43390.service - OpenSSH per-connection server daemon (10.0.0.1:43390). Sep 12 22:01:13.677397 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 43390 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:13.678839 sshd-session[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:13.683319 systemd-logind[1498]: New session 8 of user core. Sep 12 22:01:13.692695 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 22:01:13.826775 sshd[3837]: Connection closed by 10.0.0.1 port 43390 Sep 12 22:01:13.827295 sshd-session[3834]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:13.831068 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. Sep 12 22:01:13.832282 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:43390.service: Deactivated successfully. Sep 12 22:01:13.836646 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 22:01:13.838854 systemd-logind[1498]: Removed session 8. 
Sep 12 22:01:14.621289 containerd[1516]: time="2025-09-12T22:01:14.621248998Z" level=info msg="connecting to shim 65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639" address="unix:///run/containerd/s/654abb9a3907cf04887584834b6a8e6fe3bd6137199162d8dfb478f2dbb17429" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:01:14.621634 containerd[1516]: time="2025-09-12T22:01:14.621261761Z" level=info msg="connecting to shim cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d" address="unix:///run/containerd/s/27a414da0d213816f76a961954648b46c30b64ba581a2a86dd2773994bfb8c12" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:01:14.642658 systemd[1]: Started cri-containerd-cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d.scope - libcontainer container cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d. Sep 12 22:01:14.646383 systemd[1]: Started cri-containerd-65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639.scope - libcontainer container 65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639. 
Sep 12 22:01:14.654356 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:01:14.660227 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:01:14.679242 containerd[1516]: time="2025-09-12T22:01:14.679207034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bgw9,Uid:40aca368-5485-4462-ae8e-be4cfd18d945,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d\"" Sep 12 22:01:14.680101 kubelet[2654]: E0912 22:01:14.680077 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:14.682291 containerd[1516]: time="2025-09-12T22:01:14.682240222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mcfqt,Uid:38f585d2-7427-482b-aaff-37691c45a506,Namespace:kube-system,Attempt:0,} returns sandbox id \"65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639\"" Sep 12 22:01:14.683112 kubelet[2654]: E0912 22:01:14.682888 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:14.683363 containerd[1516]: time="2025-09-12T22:01:14.683335474Z" level=info msg="CreateContainer within sandbox \"cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:01:14.685070 containerd[1516]: time="2025-09-12T22:01:14.685039124Z" level=info msg="CreateContainer within sandbox \"65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:01:14.693120 containerd[1516]: 
time="2025-09-12T22:01:14.692862521Z" level=info msg="Container ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:14.697051 containerd[1516]: time="2025-09-12T22:01:14.696542114Z" level=info msg="Container 98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:01:14.700338 containerd[1516]: time="2025-09-12T22:01:14.700306004Z" level=info msg="CreateContainer within sandbox \"cbf4927cd8c378623568a53b9cf68c4d7dd1ce6aadeb3b53df58710ca5e97f0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c\"" Sep 12 22:01:14.700824 containerd[1516]: time="2025-09-12T22:01:14.700798539Z" level=info msg="StartContainer for \"ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c\"" Sep 12 22:01:14.701724 containerd[1516]: time="2025-09-12T22:01:14.701699914Z" level=info msg="connecting to shim ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c" address="unix:///run/containerd/s/27a414da0d213816f76a961954648b46c30b64ba581a2a86dd2773994bfb8c12" protocol=ttrpc version=3 Sep 12 22:01:14.703929 containerd[1516]: time="2025-09-12T22:01:14.703831647Z" level=info msg="CreateContainer within sandbox \"65d86e3b092c4b6b01f223c6d18d80fba0bf12656649cfa188f42ee184e36639\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95\"" Sep 12 22:01:14.705165 containerd[1516]: time="2025-09-12T22:01:14.705122217Z" level=info msg="StartContainer for \"98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95\"" Sep 12 22:01:14.706056 containerd[1516]: time="2025-09-12T22:01:14.706025152Z" level=info msg="connecting to shim 98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95" 
address="unix:///run/containerd/s/654abb9a3907cf04887584834b6a8e6fe3bd6137199162d8dfb478f2dbb17429" protocol=ttrpc version=3 Sep 12 22:01:14.721657 systemd[1]: Started cri-containerd-ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c.scope - libcontainer container ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c. Sep 12 22:01:14.724023 systemd[1]: Started cri-containerd-98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95.scope - libcontainer container 98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95. Sep 12 22:01:14.752061 containerd[1516]: time="2025-09-12T22:01:14.752024349Z" level=info msg="StartContainer for \"98dac2786b35be98a1161d42aae8fd68b216415ce26a5f9a725d5d7e43589f95\" returns successfully" Sep 12 22:01:14.752886 containerd[1516]: time="2025-09-12T22:01:14.752857671Z" level=info msg="StartContainer for \"ba4065bb2b532cfd74eca00933a8723e19d59a552f790193b42b19c98bacab3c\" returns successfully" Sep 12 22:01:15.376728 kubelet[2654]: E0912 22:01:15.376624 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:15.380850 kubelet[2654]: E0912 22:01:15.380812 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:15.391390 kubelet[2654]: I0912 22:01:15.391087 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mcfqt" podStartSLOduration=26.391066767 podStartE2EDuration="26.391066767s" podCreationTimestamp="2025-09-12 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:01:15.390234692 +0000 UTC m=+31.204408539" watchObservedRunningTime="2025-09-12 22:01:15.391066767 +0000 UTC 
m=+31.205240574" Sep 12 22:01:16.386209 kubelet[2654]: E0912 22:01:16.386181 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:16.386705 kubelet[2654]: E0912 22:01:16.386299 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:17.385733 kubelet[2654]: E0912 22:01:17.385610 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:17.385733 kubelet[2654]: E0912 22:01:17.385658 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:18.847027 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:43410.service - OpenSSH per-connection server daemon (10.0.0.1:43410). Sep 12 22:01:18.905734 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 43410 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:18.907062 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:18.913520 systemd-logind[1498]: New session 9 of user core. Sep 12 22:01:18.923712 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 22:01:19.064419 sshd[4027]: Connection closed by 10.0.0.1 port 43410 Sep 12 22:01:19.065704 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:19.069613 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:43410.service: Deactivated successfully. Sep 12 22:01:19.071282 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 22:01:19.077356 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. 
Sep 12 22:01:19.078496 systemd-logind[1498]: Removed session 9. Sep 12 22:01:24.081414 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:43214.service - OpenSSH per-connection server daemon (10.0.0.1:43214). Sep 12 22:01:24.137623 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:24.139293 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:24.148801 systemd-logind[1498]: New session 10 of user core. Sep 12 22:01:24.164156 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 22:01:24.306243 sshd[4048]: Connection closed by 10.0.0.1 port 43214 Sep 12 22:01:24.306599 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:24.309940 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:43214.service: Deactivated successfully. Sep 12 22:01:24.312210 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 22:01:24.316855 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. Sep 12 22:01:24.320047 systemd-logind[1498]: Removed session 10. Sep 12 22:01:29.324892 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:43244.service - OpenSSH per-connection server daemon (10.0.0.1:43244). Sep 12 22:01:29.379084 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 43244 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:29.380228 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:29.384531 systemd-logind[1498]: New session 11 of user core. Sep 12 22:01:29.390663 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 22:01:29.504816 sshd[4066]: Connection closed by 10.0.0.1 port 43244 Sep 12 22:01:29.505130 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:29.516584 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:43244.service: Deactivated successfully. Sep 12 22:01:29.519495 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 22:01:29.522368 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Sep 12 22:01:29.524386 systemd-logind[1498]: Removed session 11. Sep 12 22:01:29.526272 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:43260.service - OpenSSH per-connection server daemon (10.0.0.1:43260). Sep 12 22:01:29.584084 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 43260 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:29.585171 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:29.589269 systemd-logind[1498]: New session 12 of user core. Sep 12 22:01:29.604688 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 22:01:29.757465 sshd[4083]: Connection closed by 10.0.0.1 port 43260 Sep 12 22:01:29.758133 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:29.767992 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:43260.service: Deactivated successfully. Sep 12 22:01:29.770913 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 22:01:29.774059 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. Sep 12 22:01:29.777778 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:43280.service - OpenSSH per-connection server daemon (10.0.0.1:43280). Sep 12 22:01:29.779350 systemd-logind[1498]: Removed session 12. 
Sep 12 22:01:29.832380 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 43280 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:29.833665 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:29.838398 systemd-logind[1498]: New session 13 of user core. Sep 12 22:01:29.844660 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 22:01:29.956549 sshd[4098]: Connection closed by 10.0.0.1 port 43280 Sep 12 22:01:29.957060 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:29.960476 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:43280.service: Deactivated successfully. Sep 12 22:01:29.962185 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 22:01:29.962986 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. Sep 12 22:01:29.964153 systemd-logind[1498]: Removed session 13. Sep 12 22:01:34.975467 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150). Sep 12 22:01:35.044058 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:35.046041 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:35.051014 systemd-logind[1498]: New session 14 of user core. Sep 12 22:01:35.058800 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 22:01:35.187552 sshd[4114]: Connection closed by 10.0.0.1 port 39150 Sep 12 22:01:35.188338 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:35.192187 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:39150.service: Deactivated successfully. Sep 12 22:01:35.195293 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 22:01:35.197755 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. 
Sep 12 22:01:35.199820 systemd-logind[1498]: Removed session 14. Sep 12 22:01:40.204381 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:54844.service - OpenSSH per-connection server daemon (10.0.0.1:54844). Sep 12 22:01:40.268445 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 54844 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:40.269711 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:40.274138 systemd-logind[1498]: New session 15 of user core. Sep 12 22:01:40.283823 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 22:01:40.405328 sshd[4132]: Connection closed by 10.0.0.1 port 54844 Sep 12 22:01:40.405722 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:40.416666 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:54844.service: Deactivated successfully. Sep 12 22:01:40.418176 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 22:01:40.420350 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. Sep 12 22:01:40.422740 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:54848.service - OpenSSH per-connection server daemon (10.0.0.1:54848). Sep 12 22:01:40.423346 systemd-logind[1498]: Removed session 15. Sep 12 22:01:40.474112 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 54848 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:40.475622 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:40.479463 systemd-logind[1498]: New session 16 of user core. Sep 12 22:01:40.486662 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 12 22:01:40.671712 sshd[4148]: Connection closed by 10.0.0.1 port 54848 Sep 12 22:01:40.672520 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:40.685405 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:54848.service: Deactivated successfully. Sep 12 22:01:40.687116 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 22:01:40.687876 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. Sep 12 22:01:40.690843 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:54874.service - OpenSSH per-connection server daemon (10.0.0.1:54874). Sep 12 22:01:40.692153 systemd-logind[1498]: Removed session 16. Sep 12 22:01:40.753188 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 54874 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:40.754754 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:40.759367 systemd-logind[1498]: New session 17 of user core. Sep 12 22:01:40.769685 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 22:01:42.016521 sshd[4162]: Connection closed by 10.0.0.1 port 54874 Sep 12 22:01:42.017612 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:42.025936 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:54874.service: Deactivated successfully. Sep 12 22:01:42.030433 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 22:01:42.033926 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. Sep 12 22:01:42.038301 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:54878.service - OpenSSH per-connection server daemon (10.0.0.1:54878). Sep 12 22:01:42.041300 systemd-logind[1498]: Removed session 17. 
Sep 12 22:01:42.100847 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 54878 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:42.102412 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:42.106295 systemd-logind[1498]: New session 18 of user core. Sep 12 22:01:42.112671 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 22:01:42.334924 sshd[4185]: Connection closed by 10.0.0.1 port 54878 Sep 12 22:01:42.335678 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:42.349122 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:54878.service: Deactivated successfully. Sep 12 22:01:42.353886 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 22:01:42.355352 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. Sep 12 22:01:42.358888 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890). Sep 12 22:01:42.361060 systemd-logind[1498]: Removed session 18. Sep 12 22:01:42.411741 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:42.412981 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:42.417802 systemd-logind[1498]: New session 19 of user core. Sep 12 22:01:42.425646 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 22:01:42.537107 sshd[4200]: Connection closed by 10.0.0.1 port 54890 Sep 12 22:01:42.537641 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:42.541818 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:54890.service: Deactivated successfully. Sep 12 22:01:42.543619 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 22:01:42.544269 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. 
Sep 12 22:01:42.545206 systemd-logind[1498]: Removed session 19. Sep 12 22:01:47.556061 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896). Sep 12 22:01:47.632358 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:47.633866 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:47.640634 systemd-logind[1498]: New session 20 of user core. Sep 12 22:01:47.647796 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 22:01:47.772807 sshd[4223]: Connection closed by 10.0.0.1 port 54896 Sep 12 22:01:47.773306 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:47.777088 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:54896.service: Deactivated successfully. Sep 12 22:01:47.778890 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 22:01:47.779665 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. Sep 12 22:01:47.780704 systemd-logind[1498]: Removed session 20. Sep 12 22:01:52.787722 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:51102.service - OpenSSH per-connection server daemon (10.0.0.1:51102). Sep 12 22:01:52.850135 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 51102 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:52.851351 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:52.855208 systemd-logind[1498]: New session 21 of user core. Sep 12 22:01:52.868700 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 12 22:01:52.983950 sshd[4241]: Connection closed by 10.0.0.1 port 51102 Sep 12 22:01:52.984315 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:52.987905 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:51102.service: Deactivated successfully. Sep 12 22:01:52.989597 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 22:01:52.990355 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. Sep 12 22:01:52.991482 systemd-logind[1498]: Removed session 21. Sep 12 22:01:57.268672 kubelet[2654]: E0912 22:01:57.268640 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:01:58.010387 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:51112.service - OpenSSH per-connection server daemon (10.0.0.1:51112). Sep 12 22:01:58.062101 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 51112 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:58.063419 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:58.067431 systemd-logind[1498]: New session 22 of user core. Sep 12 22:01:58.076658 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 22:01:58.184294 sshd[4257]: Connection closed by 10.0.0.1 port 51112 Sep 12 22:01:58.185316 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Sep 12 22:01:58.196740 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:51112.service: Deactivated successfully. Sep 12 22:01:58.198496 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 22:01:58.200765 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit. Sep 12 22:01:58.202233 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:51128.service - OpenSSH per-connection server daemon (10.0.0.1:51128). Sep 12 22:01:58.204391 systemd-logind[1498]: Removed session 22. 
Sep 12 22:01:58.264870 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 51128 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:01:58.266869 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:01:58.273142 systemd-logind[1498]: New session 23 of user core. Sep 12 22:01:58.279665 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 22:01:59.268628 kubelet[2654]: E0912 22:01:59.268588 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:02:00.084021 kubelet[2654]: I0912 22:02:00.083963 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6bgw9" podStartSLOduration=71.083912984 podStartE2EDuration="1m11.083912984s" podCreationTimestamp="2025-09-12 22:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:01:15.414920738 +0000 UTC m=+31.229094585" watchObservedRunningTime="2025-09-12 22:02:00.083912984 +0000 UTC m=+75.898086831" Sep 12 22:02:00.097037 containerd[1516]: time="2025-09-12T22:02:00.096624236Z" level=info msg="StopContainer for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" with timeout 30 (s)" Sep 12 22:02:00.097588 containerd[1516]: time="2025-09-12T22:02:00.097515380Z" level=info msg="Stop container \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" with signal terminated" Sep 12 22:02:00.108359 systemd[1]: cri-containerd-c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56.scope: Deactivated successfully. 
Sep 12 22:02:00.110013 containerd[1516]: time="2025-09-12T22:02:00.109970208Z" level=info msg="received exit event container_id:\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" id:\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" pid:3208 exited_at:{seconds:1757714520 nanos:109620550}" Sep 12 22:02:00.110243 containerd[1516]: time="2025-09-12T22:02:00.110144877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" id:\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" pid:3208 exited_at:{seconds:1757714520 nanos:109620550}" Sep 12 22:02:00.126778 containerd[1516]: time="2025-09-12T22:02:00.125862703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" id:\"5ce8fc2fd181ee06efb4b59b290b4f8aa83c6fc7398d93837351acdcd76b3382\" pid:4301 exited_at:{seconds:1757714520 nanos:125561801}" Sep 12 22:02:00.130829 containerd[1516]: time="2025-09-12T22:02:00.130730161Z" level=info msg="StopContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" with timeout 2 (s)" Sep 12 22:02:00.131580 containerd[1516]: time="2025-09-12T22:02:00.131330484Z" level=info msg="Stop container \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" with signal terminated" Sep 12 22:02:00.136476 containerd[1516]: time="2025-09-12T22:02:00.136429447Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:02:00.139149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56-rootfs.mount: Deactivated successfully. 
Sep 12 22:02:00.143771 systemd-networkd[1432]: lxc_health: Link DOWN Sep 12 22:02:00.143777 systemd-networkd[1432]: lxc_health: Lost carrier Sep 12 22:02:00.152574 containerd[1516]: time="2025-09-12T22:02:00.152531529Z" level=info msg="StopContainer for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" returns successfully" Sep 12 22:02:00.155030 containerd[1516]: time="2025-09-12T22:02:00.154981417Z" level=info msg="StopPodSandbox for \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\"" Sep 12 22:02:00.160142 systemd[1]: cri-containerd-a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8.scope: Deactivated successfully. Sep 12 22:02:00.160442 systemd[1]: cri-containerd-a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8.scope: Consumed 6.322s CPU time, 122.2M memory peak, 176K read from disk, 14.3M written to disk. Sep 12 22:02:00.161716 containerd[1516]: time="2025-09-12T22:02:00.161648244Z" level=info msg="received exit event container_id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" pid:3310 exited_at:{seconds:1757714520 nanos:160554032}" Sep 12 22:02:00.161879 containerd[1516]: time="2025-09-12T22:02:00.161801914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" id:\"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" pid:3310 exited_at:{seconds:1757714520 nanos:160554032}" Sep 12 22:02:00.164400 containerd[1516]: time="2025-09-12T22:02:00.164363955Z" level=info msg="Container to stop \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.170812 systemd[1]: cri-containerd-902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca.scope: Deactivated successfully. 
Sep 12 22:02:00.173568 containerd[1516]: time="2025-09-12T22:02:00.173173409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" id:\"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" pid:2889 exit_status:137 exited_at:{seconds:1757714520 nanos:172784713}" Sep 12 22:02:00.179805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8-rootfs.mount: Deactivated successfully. Sep 12 22:02:00.188121 containerd[1516]: time="2025-09-12T22:02:00.188076525Z" level=info msg="StopContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" returns successfully" Sep 12 22:02:00.188940 containerd[1516]: time="2025-09-12T22:02:00.188838038Z" level=info msg="StopPodSandbox for \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\"" Sep 12 22:02:00.189012 containerd[1516]: time="2025-09-12T22:02:00.188942591Z" level=info msg="Container to stop \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.189012 containerd[1516]: time="2025-09-12T22:02:00.188956071Z" level=info msg="Container to stop \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.189012 containerd[1516]: time="2025-09-12T22:02:00.188964750Z" level=info msg="Container to stop \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.189012 containerd[1516]: time="2025-09-12T22:02:00.188972510Z" level=info msg="Container to stop \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.189012 containerd[1516]: 
time="2025-09-12T22:02:00.188980429Z" level=info msg="Container to stop \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 22:02:00.194889 systemd[1]: cri-containerd-2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3.scope: Deactivated successfully. Sep 12 22:02:00.204742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca-rootfs.mount: Deactivated successfully. Sep 12 22:02:00.207861 containerd[1516]: time="2025-09-12T22:02:00.207753705Z" level=info msg="shim disconnected" id=902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca namespace=k8s.io Sep 12 22:02:00.207861 containerd[1516]: time="2025-09-12T22:02:00.207802142Z" level=warning msg="cleaning up after shim disconnected" id=902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca namespace=k8s.io Sep 12 22:02:00.207861 containerd[1516]: time="2025-09-12T22:02:00.207836180Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 22:02:00.217749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3-rootfs.mount: Deactivated successfully. 
Sep 12 22:02:00.227205 containerd[1516]: time="2025-09-12T22:02:00.227167501Z" level=info msg="shim disconnected" id=2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3 namespace=k8s.io Sep 12 22:02:00.227431 containerd[1516]: time="2025-09-12T22:02:00.227198659Z" level=warning msg="cleaning up after shim disconnected" id=2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3 namespace=k8s.io Sep 12 22:02:00.227431 containerd[1516]: time="2025-09-12T22:02:00.227228217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 22:02:00.230595 containerd[1516]: time="2025-09-12T22:02:00.230550891Z" level=info msg="received exit event sandbox_id:\"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" exit_status:137 exited_at:{seconds:1757714520 nanos:172784713}" Sep 12 22:02:00.231046 containerd[1516]: time="2025-09-12T22:02:00.230557891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" id:\"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" pid:2805 exit_status:137 exited_at:{seconds:1757714520 nanos:195265479}" Sep 12 22:02:00.231206 containerd[1516]: time="2025-09-12T22:02:00.231185812Z" level=info msg="received exit event sandbox_id:\"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" exit_status:137 exited_at:{seconds:1757714520 nanos:195265479}" Sep 12 22:02:00.232725 containerd[1516]: time="2025-09-12T22:02:00.232688919Z" level=info msg="TearDown network for sandbox \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" successfully" Sep 12 22:02:00.232793 containerd[1516]: time="2025-09-12T22:02:00.232717317Z" level=info msg="StopPodSandbox for \"902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca\" returns successfully" Sep 12 22:02:00.232921 containerd[1516]: time="2025-09-12T22:02:00.232897066Z" level=info msg="TearDown network for sandbox 
\"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" successfully" Sep 12 22:02:00.232981 containerd[1516]: time="2025-09-12T22:02:00.232969261Z" level=info msg="StopPodSandbox for \"2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3\" returns successfully" Sep 12 22:02:00.233966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-902fb0c94e2c9e4766c384375fb136d6f77290c2f435c35e06279434ffc76eca-shm.mount: Deactivated successfully. Sep 12 22:02:00.234070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dce8cb1d6f43d6856b5ae6f3c020387cd4e73f5d08be4ccf530fc10f95df2d3-shm.mount: Deactivated successfully. Sep 12 22:02:00.259994 kubelet[2654]: I0912 22:02:00.259951 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-clustermesh-secrets\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.259994 kubelet[2654]: I0912 22:02:00.259990 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-xtables-lock\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.259994 kubelet[2654]: I0912 22:02:00.260006 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hostproc\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260025 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hubble-tls\") pod 
\"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260042 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjwfm\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-kube-api-access-fjwfm\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260058 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-cgroup\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260075 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmf7\" (UniqueName: \"kubernetes.io/projected/65f93ffa-c339-4fba-809f-d74e68bf7c96-kube-api-access-hcmf7\") pod \"65f93ffa-c339-4fba-809f-d74e68bf7c96\" (UID: \"65f93ffa-c339-4fba-809f-d74e68bf7c96\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260091 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-run\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261552 kubelet[2654]: I0912 22:02:00.260105 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-lib-modules\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260119 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cni-path\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260134 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-kernel\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260151 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65f93ffa-c339-4fba-809f-d74e68bf7c96-cilium-config-path\") pod \"65f93ffa-c339-4fba-809f-d74e68bf7c96\" (UID: \"65f93ffa-c339-4fba-809f-d74e68bf7c96\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260166 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-etc-cni-netd\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260181 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-net\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.261702 kubelet[2654]: I0912 22:02:00.260198 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-config-path\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 
22:02:00.261818 kubelet[2654]: I0912 22:02:00.260214 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-bpf-maps\") pod \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\" (UID: \"b7af3e26-aad0-4be9-9ea5-5eb501e638b7\") " Sep 12 22:02:00.267066 kubelet[2654]: I0912 22:02:00.266751 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.267066 kubelet[2654]: I0912 22:02:00.266832 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.268111 kubelet[2654]: I0912 22:02:00.268002 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.270916 kubelet[2654]: I0912 22:02:00.270886 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-kube-api-access-fjwfm" (OuterVolumeSpecName: "kube-api-access-fjwfm") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). 
InnerVolumeSpecName "kube-api-access-fjwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 22:02:00.271256 kubelet[2654]: I0912 22:02:00.271043 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 22:02:00.271341 kubelet[2654]: I0912 22:02:00.271325 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.271409 kubelet[2654]: I0912 22:02:00.271397 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.273374 kubelet[2654]: I0912 22:02:00.273339 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.274567 kubelet[2654]: I0912 22:02:00.273659 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 22:02:00.274780 kubelet[2654]: I0912 22:02:00.274758 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.274933 kubelet[2654]: I0912 22:02:00.274916 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.275016 kubelet[2654]: I0912 22:02:00.275000 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.277226 kubelet[2654]: I0912 22:02:00.277086 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f93ffa-c339-4fba-809f-d74e68bf7c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65f93ffa-c339-4fba-809f-d74e68bf7c96" (UID: "65f93ffa-c339-4fba-809f-d74e68bf7c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 22:02:00.278293 kubelet[2654]: I0912 22:02:00.278214 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 22:02:00.278769 kubelet[2654]: I0912 22:02:00.278728 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f93ffa-c339-4fba-809f-d74e68bf7c96-kube-api-access-hcmf7" (OuterVolumeSpecName: "kube-api-access-hcmf7") pod "65f93ffa-c339-4fba-809f-d74e68bf7c96" (UID: "65f93ffa-c339-4fba-809f-d74e68bf7c96"). InnerVolumeSpecName "kube-api-access-hcmf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 22:02:00.278976 kubelet[2654]: I0912 22:02:00.278942 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7af3e26-aad0-4be9-9ea5-5eb501e638b7" (UID: "b7af3e26-aad0-4be9-9ea5-5eb501e638b7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 22:02:00.285231 systemd[1]: Removed slice kubepods-besteffort-pod65f93ffa_c339_4fba_809f_d74e68bf7c96.slice - libcontainer container kubepods-besteffort-pod65f93ffa_c339_4fba_809f_d74e68bf7c96.slice. Sep 12 22:02:00.286595 systemd[1]: Removed slice kubepods-burstable-podb7af3e26_aad0_4be9_9ea5_5eb501e638b7.slice - libcontainer container kubepods-burstable-podb7af3e26_aad0_4be9_9ea5_5eb501e638b7.slice. Sep 12 22:02:00.286697 systemd[1]: kubepods-burstable-podb7af3e26_aad0_4be9_9ea5_5eb501e638b7.slice: Consumed 6.410s CPU time, 122.5M memory peak, 220K read from disk, 14.4M written to disk. Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360727 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360760 2654 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360769 2654 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360778 2654 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360789 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65f93ffa-c339-4fba-809f-d74e68bf7c96-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360797 2654 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360805 2654 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.360836 kubelet[2654]: I0912 22:02:00.360812 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360820 2654 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360828 2654 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360836 2654 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360843 2654 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360850 2654 reconciler_common.go:293] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360858 2654 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjwfm\" (UniqueName: \"kubernetes.io/projected/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-kube-api-access-fjwfm\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360865 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7af3e26-aad0-4be9-9ea5-5eb501e638b7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.361070 kubelet[2654]: I0912 22:02:00.360873 2654 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcmf7\" (UniqueName: \"kubernetes.io/projected/65f93ffa-c339-4fba-809f-d74e68bf7c96-kube-api-access-hcmf7\") on node \"localhost\" DevicePath \"\"" Sep 12 22:02:00.498175 kubelet[2654]: I0912 22:02:00.498142 2654 scope.go:117] "RemoveContainer" containerID="a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8" Sep 12 22:02:00.503659 containerd[1516]: time="2025-09-12T22:02:00.502989159Z" level=info msg="RemoveContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\"" Sep 12 22:02:00.514627 containerd[1516]: time="2025-09-12T22:02:00.514561601Z" level=info msg="RemoveContainer for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" returns successfully" Sep 12 22:02:00.515964 kubelet[2654]: I0912 22:02:00.515860 2654 scope.go:117] "RemoveContainer" containerID="1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8" Sep 12 22:02:00.518102 containerd[1516]: time="2025-09-12T22:02:00.518036626Z" level=info msg="RemoveContainer for \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\"" Sep 12 22:02:00.523311 containerd[1516]: 
time="2025-09-12T22:02:00.523232024Z" level=info msg="RemoveContainer for \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" returns successfully" Sep 12 22:02:00.523654 kubelet[2654]: I0912 22:02:00.523610 2654 scope.go:117] "RemoveContainer" containerID="3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff" Sep 12 22:02:00.527749 containerd[1516]: time="2025-09-12T22:02:00.527713106Z" level=info msg="RemoveContainer for \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\"" Sep 12 22:02:00.532091 containerd[1516]: time="2025-09-12T22:02:00.532058636Z" level=info msg="RemoveContainer for \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" returns successfully" Sep 12 22:02:00.532321 kubelet[2654]: I0912 22:02:00.532274 2654 scope.go:117] "RemoveContainer" containerID="8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482" Sep 12 22:02:00.533842 containerd[1516]: time="2025-09-12T22:02:00.533813128Z" level=info msg="RemoveContainer for \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\"" Sep 12 22:02:00.537265 containerd[1516]: time="2025-09-12T22:02:00.537229956Z" level=info msg="RemoveContainer for \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" returns successfully" Sep 12 22:02:00.537436 kubelet[2654]: I0912 22:02:00.537412 2654 scope.go:117] "RemoveContainer" containerID="8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38" Sep 12 22:02:00.538672 containerd[1516]: time="2025-09-12T22:02:00.538644188Z" level=info msg="RemoveContainer for \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\"" Sep 12 22:02:00.541937 containerd[1516]: time="2025-09-12T22:02:00.541829471Z" level=info msg="RemoveContainer for \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" returns successfully" Sep 12 22:02:00.542021 kubelet[2654]: I0912 22:02:00.541976 2654 scope.go:117] "RemoveContainer" 
containerID="a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8" Sep 12 22:02:00.542190 containerd[1516]: time="2025-09-12T22:02:00.542153211Z" level=error msg="ContainerStatus for \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\": not found" Sep 12 22:02:00.545422 kubelet[2654]: E0912 22:02:00.545379 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\": not found" containerID="a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8" Sep 12 22:02:00.545544 kubelet[2654]: I0912 22:02:00.545438 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8"} err="failed to get container status \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"a69baac08612c26abe64fb1260ac4ee2197c63c72b76ce60c810670418bedcf8\": not found" Sep 12 22:02:00.545583 kubelet[2654]: I0912 22:02:00.545549 2654 scope.go:117] "RemoveContainer" containerID="1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8" Sep 12 22:02:00.545822 containerd[1516]: time="2025-09-12T22:02:00.545784625Z" level=error msg="ContainerStatus for \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\": not found" Sep 12 22:02:00.545951 kubelet[2654]: E0912 22:02:00.545932 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\": not found" containerID="1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8" Sep 12 22:02:00.545996 kubelet[2654]: I0912 22:02:00.545974 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8"} err="failed to get container status \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f778eb048251df4ca12fbf542eb494a949f8f433c5d855a330b59c9aa74c4b8\": not found" Sep 12 22:02:00.546027 kubelet[2654]: I0912 22:02:00.546018 2654 scope.go:117] "RemoveContainer" containerID="3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff" Sep 12 22:02:00.546195 containerd[1516]: time="2025-09-12T22:02:00.546167922Z" level=error msg="ContainerStatus for \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\": not found" Sep 12 22:02:00.546288 kubelet[2654]: E0912 22:02:00.546271 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\": not found" containerID="3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff" Sep 12 22:02:00.546330 kubelet[2654]: I0912 22:02:00.546291 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff"} err="failed to get container status \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"3b3c9f1607dc888af9ab36e1ccaab0863426cce58d3cab61e6ac67c8ed4648ff\": not found" Sep 12 22:02:00.546330 kubelet[2654]: I0912 22:02:00.546308 2654 scope.go:117] "RemoveContainer" containerID="8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482" Sep 12 22:02:00.546447 containerd[1516]: time="2025-09-12T22:02:00.546423706Z" level=error msg="ContainerStatus for \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\": not found" Sep 12 22:02:00.546551 kubelet[2654]: E0912 22:02:00.546532 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\": not found" containerID="8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482" Sep 12 22:02:00.546593 kubelet[2654]: I0912 22:02:00.546556 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482"} err="failed to get container status \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d8c7a1e9989b89f1dc4030fcf56fa82dc7e498df659bdf419871930ed6c3482\": not found" Sep 12 22:02:00.546593 kubelet[2654]: I0912 22:02:00.546573 2654 scope.go:117] "RemoveContainer" containerID="8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38" Sep 12 22:02:00.546757 containerd[1516]: time="2025-09-12T22:02:00.546725647Z" level=error msg="ContainerStatus for \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\": not found" Sep 12 22:02:00.546859 kubelet[2654]: E0912 22:02:00.546840 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\": not found" containerID="8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38" Sep 12 22:02:00.546897 kubelet[2654]: I0912 22:02:00.546864 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38"} err="failed to get container status \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ebfbffbac50e9449419f6135e04d6bd8c3dfc76dd22881201e8c8be0bf36e38\": not found" Sep 12 22:02:00.546897 kubelet[2654]: I0912 22:02:00.546881 2654 scope.go:117] "RemoveContainer" containerID="c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56" Sep 12 22:02:00.548484 containerd[1516]: time="2025-09-12T22:02:00.548457060Z" level=info msg="RemoveContainer for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\"" Sep 12 22:02:00.551367 containerd[1516]: time="2025-09-12T22:02:00.551258326Z" level=info msg="RemoveContainer for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" returns successfully" Sep 12 22:02:00.551437 kubelet[2654]: I0912 22:02:00.551421 2654 scope.go:117] "RemoveContainer" containerID="c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56" Sep 12 22:02:00.551758 containerd[1516]: time="2025-09-12T22:02:00.551709058Z" level=error msg="ContainerStatus for \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\": not found" Sep 12 22:02:00.551874 kubelet[2654]: E0912 22:02:00.551853 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\": not found" containerID="c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56" Sep 12 22:02:00.551917 kubelet[2654]: I0912 22:02:00.551879 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56"} err="failed to get container status \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\": rpc error: code = NotFound desc = an error occurred when try to find container \"c85d537651d48a152c60e59942c4acdb014c29ed4b833bcd03ba620172a37f56\": not found" Sep 12 22:02:01.138965 systemd[1]: var-lib-kubelet-pods-65f93ffa\x2dc339\x2d4fba\x2d809f\x2dd74e68bf7c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhcmf7.mount: Deactivated successfully. Sep 12 22:02:01.139072 systemd[1]: var-lib-kubelet-pods-b7af3e26\x2daad0\x2d4be9\x2d9ea5\x2d5eb501e638b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjwfm.mount: Deactivated successfully. Sep 12 22:02:01.139123 systemd[1]: var-lib-kubelet-pods-b7af3e26\x2daad0\x2d4be9\x2d9ea5\x2d5eb501e638b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 22:02:01.139171 systemd[1]: var-lib-kubelet-pods-b7af3e26\x2daad0\x2d4be9\x2d9ea5\x2d5eb501e638b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 22:02:02.056026 sshd[4274]: Connection closed by 10.0.0.1 port 51128 Sep 12 22:02:02.055934 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Sep 12 22:02:02.067154 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:51128.service: Deactivated successfully. Sep 12 22:02:02.069228 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 22:02:02.069488 systemd[1]: session-23.scope: Consumed 1.131s CPU time, 23M memory peak. Sep 12 22:02:02.070115 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit. Sep 12 22:02:02.073185 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:55984.service - OpenSSH per-connection server daemon (10.0.0.1:55984). Sep 12 22:02:02.073956 systemd-logind[1498]: Removed session 23. Sep 12 22:02:02.138560 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 55984 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI Sep 12 22:02:02.139889 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:02:02.145124 systemd-logind[1498]: New session 24 of user core. Sep 12 22:02:02.150700 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 22:02:02.272556 kubelet[2654]: I0912 22:02:02.272006 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f93ffa-c339-4fba-809f-d74e68bf7c96" path="/var/lib/kubelet/pods/65f93ffa-c339-4fba-809f-d74e68bf7c96/volumes" Sep 12 22:02:02.272556 kubelet[2654]: I0912 22:02:02.272373 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" path="/var/lib/kubelet/pods/b7af3e26-aad0-4be9-9ea5-5eb501e638b7/volumes" Sep 12 22:02:03.083235 sshd[4428]: Connection closed by 10.0.0.1 port 55984 Sep 12 22:02:03.084845 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Sep 12 22:02:03.096964 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:55984.service: Deactivated successfully. 
Sep 12 22:02:03.102599 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 22:02:03.106901 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit. Sep 12 22:02:03.111065 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:55994.service - OpenSSH per-connection server daemon (10.0.0.1:55994). Sep 12 22:02:03.115716 systemd-logind[1498]: Removed session 24. Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115726 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="clean-cilium-state" Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115748 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="cilium-agent" Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115754 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="mount-bpf-fs" Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115760 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="apply-sysctl-overwrites" Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115767 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65f93ffa-c339-4fba-809f-d74e68bf7c96" containerName="cilium-operator" Sep 12 22:02:03.115919 kubelet[2654]: E0912 22:02:03.115779 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="mount-cgroup" Sep 12 22:02:03.115919 kubelet[2654]: I0912 22:02:03.115802 2654 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f93ffa-c339-4fba-809f-d74e68bf7c96" containerName="cilium-operator" Sep 12 22:02:03.115919 kubelet[2654]: I0912 22:02:03.115808 2654 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7af3e26-aad0-4be9-9ea5-5eb501e638b7" containerName="cilium-agent" 
Sep 12 22:02:03.125732 systemd[1]: Created slice kubepods-burstable-podca96404f_8930_4208_bb86_9b783d9785b7.slice - libcontainer container kubepods-burstable-podca96404f_8930_4208_bb86_9b783d9785b7.slice.
Sep 12 22:02:03.127487 kubelet[2654]: W0912 22:02:03.127426 2654 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 12 22:02:03.127487 kubelet[2654]: W0912 22:02:03.127458 2654 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 12 22:02:03.127487 kubelet[2654]: E0912 22:02:03.127511 2654 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 12 22:02:03.127487 kubelet[2654]: E0912 22:02:03.127522 2654 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 12 22:02:03.127740 kubelet[2654]: W0912 22:02:03.127428 2654 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 12 22:02:03.127740 kubelet[2654]: E0912 22:02:03.127551 2654 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 12 22:02:03.128453 kubelet[2654]: W0912 22:02:03.128393 2654 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 12 22:02:03.128604 kubelet[2654]: E0912 22:02:03.128473 2654 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 12 22:02:03.176780 kubelet[2654]: I0912 22:02:03.176730 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-lib-modules\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176884 kubelet[2654]: I0912 22:02:03.176813 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-cni-path\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176884 kubelet[2654]: I0912 22:02:03.176832 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-xtables-lock\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176884 kubelet[2654]: I0912 22:02:03.176875 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-run\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176973 kubelet[2654]: I0912 22:02:03.176891 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-bpf-maps\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176973 kubelet[2654]: I0912 22:02:03.176908 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-host-proc-sys-net\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.176973 kubelet[2654]: I0912 22:02:03.176956 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8cl\" (UniqueName: \"kubernetes.io/projected/ca96404f-8930-4208-bb86-9b783d9785b7-kube-api-access-mf8cl\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177041 kubelet[2654]: I0912 22:02:03.176975 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-ipsec-secrets\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177041 kubelet[2654]: I0912 22:02:03.177002 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-cgroup\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177041 kubelet[2654]: I0912 22:02:03.177017 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-etc-cni-netd\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177041 kubelet[2654]: I0912 22:02:03.177033 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-host-proc-sys-kernel\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177127 kubelet[2654]: I0912 22:02:03.177079 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca96404f-8930-4208-bb86-9b783d9785b7-clustermesh-secrets\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177127 kubelet[2654]: I0912 22:02:03.177094 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-config-path\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177127 kubelet[2654]: I0912 22:02:03.177111 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca96404f-8930-4208-bb86-9b783d9785b7-hubble-tls\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.177187 kubelet[2654]: I0912 22:02:03.177144 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca96404f-8930-4208-bb86-9b783d9785b7-hostproc\") pod \"cilium-nv6cf\" (UID: \"ca96404f-8930-4208-bb86-9b783d9785b7\") " pod="kube-system/cilium-nv6cf"
Sep 12 22:02:03.179790 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 55994 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:02:03.181126 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:02:03.186461 systemd-logind[1498]: New session 25 of user core.
Sep 12 22:02:03.200717 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 22:02:03.250543 sshd[4443]: Connection closed by 10.0.0.1 port 55994
Sep 12 22:02:03.250641 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Sep 12 22:02:03.257620 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:55994.service: Deactivated successfully.
Sep 12 22:02:03.259320 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 22:02:03.260032 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit.
Sep 12 22:02:03.262433 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:56008.service - OpenSSH per-connection server daemon (10.0.0.1:56008).
Sep 12 22:02:03.263452 systemd-logind[1498]: Removed session 25.
Sep 12 22:02:03.324134 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 56008 ssh2: RSA SHA256:89WB56THnhzjx8XsKgQlSeZZaxZLOzxRKY4RxNTnHBI
Sep 12 22:02:03.325427 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:02:03.330325 systemd-logind[1498]: New session 26 of user core.
Sep 12 22:02:03.340699 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 22:02:04.280426 kubelet[2654]: E0912 22:02:04.280128 2654 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Sep 12 22:02:04.280426 kubelet[2654]: E0912 22:02:04.280246 2654 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-ipsec-secrets podName:ca96404f-8930-4208-bb86-9b783d9785b7 nodeName:}" failed. No retries permitted until 2025-09-12 22:02:04.780220616 +0000 UTC m=+80.594394463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-ipsec-secrets") pod "cilium-nv6cf" (UID: "ca96404f-8930-4208-bb86-9b783d9785b7") : failed to sync secret cache: timed out waiting for the condition
Sep 12 22:02:04.280872 kubelet[2654]: E0912 22:02:04.280440 2654 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 22:02:04.280872 kubelet[2654]: E0912 22:02:04.280468 2654 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-nv6cf: failed to sync secret cache: timed out waiting for the condition
Sep 12 22:02:04.280872 kubelet[2654]: E0912 22:02:04.280540 2654 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca96404f-8930-4208-bb86-9b783d9785b7-hubble-tls podName:ca96404f-8930-4208-bb86-9b783d9785b7 nodeName:}" failed. No retries permitted until 2025-09-12 22:02:04.780526281 +0000 UTC m=+80.594700128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ca96404f-8930-4208-bb86-9b783d9785b7-hubble-tls") pod "cilium-nv6cf" (UID: "ca96404f-8930-4208-bb86-9b783d9785b7") : failed to sync secret cache: timed out waiting for the condition
Sep 12 22:02:04.281019 kubelet[2654]: E0912 22:02:04.280986 2654 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 12 22:02:04.281094 kubelet[2654]: E0912 22:02:04.281066 2654 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-config-path podName:ca96404f-8930-4208-bb86-9b783d9785b7 nodeName:}" failed. No retries permitted until 2025-09-12 22:02:04.781052296 +0000 UTC m=+80.595226143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ca96404f-8930-4208-bb86-9b783d9785b7-cilium-config-path") pod "cilium-nv6cf" (UID: "ca96404f-8930-4208-bb86-9b783d9785b7") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 22:02:04.345308 kubelet[2654]: E0912 22:02:04.345220 2654 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 22:02:04.933732 kubelet[2654]: E0912 22:02:04.933613 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:04.934392 containerd[1516]: time="2025-09-12T22:02:04.934288009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nv6cf,Uid:ca96404f-8930-4208-bb86-9b783d9785b7,Namespace:kube-system,Attempt:0,}"
Sep 12 22:02:04.966840 containerd[1516]: time="2025-09-12T22:02:04.966749702Z" level=info msg="connecting to shim 655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:02:04.991699 systemd[1]: Started cri-containerd-655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5.scope - libcontainer container 655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5.
Sep 12 22:02:05.014886 containerd[1516]: time="2025-09-12T22:02:05.014835259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nv6cf,Uid:ca96404f-8930-4208-bb86-9b783d9785b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\""
Sep 12 22:02:05.015640 kubelet[2654]: E0912 22:02:05.015613 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:05.018382 containerd[1516]: time="2025-09-12T22:02:05.018323824Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 22:02:05.027529 containerd[1516]: time="2025-09-12T22:02:05.026547219Z" level=info msg="Container 270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:02:05.032137 containerd[1516]: time="2025-09-12T22:02:05.032097093Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\""
Sep 12 22:02:05.032724 containerd[1516]: time="2025-09-12T22:02:05.032692587Z" level=info msg="StartContainer for \"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\""
Sep 12 22:02:05.033753 containerd[1516]: time="2025-09-12T22:02:05.033701382Z" level=info msg="connecting to shim 270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" protocol=ttrpc version=3
Sep 12 22:02:05.056737 systemd[1]: Started cri-containerd-270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7.scope - libcontainer container 270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7.
Sep 12 22:02:05.080343 containerd[1516]: time="2025-09-12T22:02:05.080306196Z" level=info msg="StartContainer for \"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\" returns successfully"
Sep 12 22:02:05.088099 systemd[1]: cri-containerd-270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7.scope: Deactivated successfully.
Sep 12 22:02:05.088951 containerd[1516]: time="2025-09-12T22:02:05.088136649Z" level=info msg="received exit event container_id:\"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\" id:\"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\" pid:4524 exited_at:{seconds:1757714525 nanos:87900819}"
Sep 12 22:02:05.090495 containerd[1516]: time="2025-09-12T22:02:05.089545986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\" id:\"270d5c1aa088e6b54edd9c853bce428b7bcfd24e977c205b0ba247cfe094f3f7\" pid:4524 exited_at:{seconds:1757714525 nanos:87900819}"
Sep 12 22:02:05.513091 kubelet[2654]: E0912 22:02:05.513007 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:05.517074 containerd[1516]: time="2025-09-12T22:02:05.516552454Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 22:02:05.525666 containerd[1516]: time="2025-09-12T22:02:05.525634131Z" level=info msg="Container 70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:02:05.530419 containerd[1516]: time="2025-09-12T22:02:05.530377961Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\""
Sep 12 22:02:05.531938 containerd[1516]: time="2025-09-12T22:02:05.531900813Z" level=info msg="StartContainer for \"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\""
Sep 12 22:02:05.533448 containerd[1516]: time="2025-09-12T22:02:05.533415706Z" level=info msg="connecting to shim 70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" protocol=ttrpc version=3
Sep 12 22:02:05.552669 systemd[1]: Started cri-containerd-70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686.scope - libcontainer container 70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686.
Sep 12 22:02:05.575342 containerd[1516]: time="2025-09-12T22:02:05.575236132Z" level=info msg="StartContainer for \"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\" returns successfully"
Sep 12 22:02:05.581393 systemd[1]: cri-containerd-70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686.scope: Deactivated successfully.
Sep 12 22:02:05.583489 containerd[1516]: time="2025-09-12T22:02:05.583455447Z" level=info msg="received exit event container_id:\"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\" id:\"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\" pid:4570 exited_at:{seconds:1757714525 nanos:583282575}"
Sep 12 22:02:05.583720 containerd[1516]: time="2025-09-12T22:02:05.583569002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\" id:\"70203d35bbcc6448dadbb4d36cf35a6f860f9982ccf6e2d9863f9e93249d7686\" pid:4570 exited_at:{seconds:1757714525 nanos:583282575}"
Sep 12 22:02:06.209524 kubelet[2654]: I0912 22:02:06.208557 2654 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T22:02:06Z","lastTransitionTime":"2025-09-12T22:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 22:02:06.517922 kubelet[2654]: E0912 22:02:06.517673 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:06.520562 containerd[1516]: time="2025-09-12T22:02:06.519925673Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 22:02:06.527262 containerd[1516]: time="2025-09-12T22:02:06.527227133Z" level=info msg="Container 5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:02:06.537902 containerd[1516]: time="2025-09-12T22:02:06.537868256Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\""
Sep 12 22:02:06.538249 containerd[1516]: time="2025-09-12T22:02:06.538230041Z" level=info msg="StartContainer for \"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\""
Sep 12 22:02:06.540755 containerd[1516]: time="2025-09-12T22:02:06.540717738Z" level=info msg="connecting to shim 5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" protocol=ttrpc version=3
Sep 12 22:02:06.559694 systemd[1]: Started cri-containerd-5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886.scope - libcontainer container 5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886.
Sep 12 22:02:06.587875 systemd[1]: cri-containerd-5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886.scope: Deactivated successfully.
Sep 12 22:02:06.589678 containerd[1516]: time="2025-09-12T22:02:06.589648446Z" level=info msg="StartContainer for \"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\" returns successfully"
Sep 12 22:02:06.590516 containerd[1516]: time="2025-09-12T22:02:06.590415134Z" level=info msg="received exit event container_id:\"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\" id:\"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\" pid:4614 exited_at:{seconds:1757714526 nanos:589807559}"
Sep 12 22:02:06.590516 containerd[1516]: time="2025-09-12T22:02:06.590472972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\" id:\"5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886\" pid:4614 exited_at:{seconds:1757714526 nanos:589807559}"
Sep 12 22:02:06.607543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5726513a19253bf582657b3d7899191f6efe60af15547a2c77448ea62be03886-rootfs.mount: Deactivated successfully.
Sep 12 22:02:07.525717 kubelet[2654]: E0912 22:02:07.525558 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:07.528602 containerd[1516]: time="2025-09-12T22:02:07.528564872Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 22:02:07.535304 containerd[1516]: time="2025-09-12T22:02:07.535274817Z" level=info msg="Container 0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:02:07.541430 containerd[1516]: time="2025-09-12T22:02:07.541392504Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\""
Sep 12 22:02:07.542415 containerd[1516]: time="2025-09-12T22:02:07.542388626Z" level=info msg="StartContainer for \"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\""
Sep 12 22:02:07.543269 containerd[1516]: time="2025-09-12T22:02:07.543236634Z" level=info msg="connecting to shim 0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" protocol=ttrpc version=3
Sep 12 22:02:07.567637 systemd[1]: Started cri-containerd-0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7.scope - libcontainer container 0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7.
Sep 12 22:02:07.587721 systemd[1]: cri-containerd-0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7.scope: Deactivated successfully.
Sep 12 22:02:07.588193 containerd[1516]: time="2025-09-12T22:02:07.588164086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\" id:\"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\" pid:4653 exited_at:{seconds:1757714527 nanos:587933135}"
Sep 12 22:02:07.589633 containerd[1516]: time="2025-09-12T22:02:07.589612191Z" level=info msg="received exit event container_id:\"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\" id:\"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\" pid:4653 exited_at:{seconds:1757714527 nanos:587933135}"
Sep 12 22:02:07.591485 containerd[1516]: time="2025-09-12T22:02:07.591370404Z" level=info msg="StartContainer for \"0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7\" returns successfully"
Sep 12 22:02:07.605635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e796acb1e6b73330755740ca8f6a5497209bf17845066a4dbad5d72be7d07e7-rootfs.mount: Deactivated successfully.
Sep 12 22:02:08.269661 kubelet[2654]: E0912 22:02:08.269627 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:08.529390 kubelet[2654]: E0912 22:02:08.529292 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:08.531720 containerd[1516]: time="2025-09-12T22:02:08.531597896Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 22:02:08.547178 containerd[1516]: time="2025-09-12T22:02:08.547130113Z" level=info msg="Container 68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:02:08.553227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574461152.mount: Deactivated successfully.
Sep 12 22:02:08.556190 containerd[1516]: time="2025-09-12T22:02:08.556142197Z" level=info msg="CreateContainer within sandbox \"655975e964e4d5e90f5ba710f009c5a62959d1959782419ab38637695cab4ff5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\""
Sep 12 22:02:08.557960 containerd[1516]: time="2025-09-12T22:02:08.557932615Z" level=info msg="StartContainer for \"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\""
Sep 12 22:02:08.559440 containerd[1516]: time="2025-09-12T22:02:08.559376004Z" level=info msg="connecting to shim 68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef" address="unix:///run/containerd/s/f88414e2c9a8994b8624954d3827063f8d22dff87afcd7b70849d655fa1966f8" protocol=ttrpc version=3
Sep 12 22:02:08.583705 systemd[1]: Started cri-containerd-68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef.scope - libcontainer container 68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef.
Sep 12 22:02:08.613937 containerd[1516]: time="2025-09-12T22:02:08.613904495Z" level=info msg="StartContainer for \"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" returns successfully"
Sep 12 22:02:08.666113 containerd[1516]: time="2025-09-12T22:02:08.666066789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" id:\"8da2ca7922ba1425c4e465b04c934d3b029acb4ccb6496c9528eb4af99533ee6\" pid:4719 exited_at:{seconds:1757714528 nanos:665786879}"
Sep 12 22:02:08.871540 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 22:02:09.542085 kubelet[2654]: E0912 22:02:09.541787 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:09.561202 kubelet[2654]: I0912 22:02:09.560899 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nv6cf" podStartSLOduration=6.560881216 podStartE2EDuration="6.560881216s" podCreationTimestamp="2025-09-12 22:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:02:09.558388536 +0000 UTC m=+85.372562383" watchObservedRunningTime="2025-09-12 22:02:09.560881216 +0000 UTC m=+85.375055063"
Sep 12 22:02:10.934367 kubelet[2654]: E0912 22:02:10.934328 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:11.703003 systemd-networkd[1432]: lxc_health: Link UP
Sep 12 22:02:11.711543 systemd-networkd[1432]: lxc_health: Gained carrier
Sep 12 22:02:11.909398 containerd[1516]: time="2025-09-12T22:02:11.909363252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" id:\"a0bf55e6e9896139a0621806f62fd8fcee139cf79ee52a9122709ec910715c88\" pid:5223 exited_at:{seconds:1757714531 nanos:907864812}"
Sep 12 22:02:12.938531 kubelet[2654]: E0912 22:02:12.937576 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:13.548376 kubelet[2654]: E0912 22:02:13.548262 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:13.568676 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Sep 12 22:02:14.063855 containerd[1516]: time="2025-09-12T22:02:14.063812746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" id:\"c5da36c609c7af020459aa1cc46eb49138c0c65c2d9bf647498a7592ee33a177\" pid:5259 exited_at:{seconds:1757714534 nanos:63139598}"
Sep 12 22:02:14.550063 kubelet[2654]: E0912 22:02:14.549956 2654 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:02:16.226795 containerd[1516]: time="2025-09-12T22:02:16.226664218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" id:\"1cb8bc667ed8ef3b6cc95e74fbebdd538efcc74384dbe3adbec78da78787d4a6\" pid:5290 exited_at:{seconds:1757714536 nanos:226385621}"
Sep 12 22:02:18.333508 containerd[1516]: time="2025-09-12T22:02:18.333460153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c3f43863a93acfeb1ab85a7b01dd77d8b0324c85346b2d016b2d2291bf44ef\" id:\"716ec67484743c6e533c0f051120adc4d841c78d21c163be24c2afd23bd7e38b\" pid:5313 exited_at:{seconds:1757714538 nanos:332426083}"
Sep 12 22:02:18.339440 sshd[4454]: Connection closed by 10.0.0.1 port 56008
Sep 12 22:02:18.340972 sshd-session[4450]: pam_unix(sshd:session): session closed for user core
Sep 12 22:02:18.344209 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:56008.service: Deactivated successfully.
Sep 12 22:02:18.346041 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 22:02:18.346885 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit.
Sep 12 22:02:18.347982 systemd-logind[1498]: Removed session 26.