Sep 6 00:05:39.833771 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 00:05:39.833792 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 6 00:05:39.833802 kernel: KASLR enabled
Sep 6 00:05:39.833808 kernel: efi: EFI v2.7 by EDK II
Sep 6 00:05:39.833814 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 6 00:05:39.833820 kernel: random: crng init done
Sep 6 00:05:39.833827 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:05:39.833833 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 6 00:05:39.833839 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 00:05:39.833847 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833854 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833860 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833866 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833872 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833880 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833889 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833895 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833902 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:05:39.833908 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 6 00:05:39.833915 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:05:39.833921 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:05:39.833933 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 6 00:05:39.833940 kernel: Zone ranges:
Sep 6 00:05:39.833946 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:05:39.833953 kernel: DMA32 empty
Sep 6 00:05:39.833961 kernel: Normal empty
Sep 6 00:05:39.833967 kernel: Movable zone start for each node
Sep 6 00:05:39.833974 kernel: Early memory node ranges
Sep 6 00:05:39.833980 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 6 00:05:39.833987 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 6 00:05:39.833994 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 6 00:05:39.834000 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 6 00:05:39.834006 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 6 00:05:39.834013 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 6 00:05:39.834019 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 6 00:05:39.834026 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:05:39.834033 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 6 00:05:39.834041 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:05:39.834048 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 00:05:39.834054 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:05:39.834063 kernel: psci: Trusted OS migration not required
Sep 6 00:05:39.834071 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:05:39.834078 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 6 00:05:39.834086 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 6 00:05:39.834093 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 6 00:05:39.834100 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 6 00:05:39.834107 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:05:39.834114 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:05:39.834121 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 00:05:39.834129 kernel: CPU features: detected: Spectre-v4
Sep 6 00:05:39.834135 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:05:39.834142 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:05:39.834149 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:05:39.834158 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 00:05:39.834165 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 00:05:39.834172 kernel: alternatives: applying boot alternatives
Sep 6 00:05:39.834180 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:05:39.834188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:05:39.834195 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:05:39.834202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:05:39.834209 kernel: Fallback order for Node 0: 0
Sep 6 00:05:39.834216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 6 00:05:39.834223 kernel: Policy zone: DMA
Sep 6 00:05:39.834230 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:05:39.834238 kernel: software IO TLB: area num 4.
Sep 6 00:05:39.834246 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 6 00:05:39.834253 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 6 00:05:39.834260 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:05:39.834267 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:05:39.834274 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:05:39.834282 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:05:39.834289 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:05:39.834296 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:05:39.834304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:05:39.834311 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:05:39.834319 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:05:39.834326 kernel: GICv3: 256 SPIs implemented
Sep 6 00:05:39.834333 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:05:39.834340 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:05:39.834347 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 6 00:05:39.834354 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 6 00:05:39.834365 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 6 00:05:39.834372 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:05:39.834380 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:05:39.834387 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 6 00:05:39.834394 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 6 00:05:39.834404 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 6 00:05:39.834413 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:05:39.834421 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 00:05:39.834430 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 00:05:39.834439 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 00:05:39.834448 kernel: arm-pv: using stolen time PV
Sep 6 00:05:39.834456 kernel: Console: colour dummy device 80x25
Sep 6 00:05:39.834463 kernel: ACPI: Core revision 20230628
Sep 6 00:05:39.834471 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 00:05:39.834479 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:05:39.834487 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 6 00:05:39.834496 kernel: landlock: Up and running.
Sep 6 00:05:39.834507 kernel: SELinux: Initializing.
Sep 6 00:05:39.834516 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:05:39.834524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:05:39.834534 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:05:39.834541 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:05:39.834549 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:05:39.834556 kernel: rcu: Max phase no-delay instances is 400.
Sep 6 00:05:39.834622 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 6 00:05:39.834634 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 6 00:05:39.834641 kernel: Remapping and enabling EFI services.
Sep 6 00:05:39.834648 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:05:39.834655 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:05:39.834662 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 6 00:05:39.834669 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 6 00:05:39.834676 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:05:39.834684 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 00:05:39.834691 kernel: Detected PIPT I-cache on CPU2
Sep 6 00:05:39.834698 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 6 00:05:39.834708 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 6 00:05:39.834716 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:05:39.834734 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 6 00:05:39.834743 kernel: Detected PIPT I-cache on CPU3
Sep 6 00:05:39.834751 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 6 00:05:39.834759 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 6 00:05:39.834766 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:05:39.834774 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 6 00:05:39.834781 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:05:39.834791 kernel: SMP: Total of 4 processors activated.
Sep 6 00:05:39.834810 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:05:39.834818 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 00:05:39.834826 kernel: CPU features: detected: Common not Private translations
Sep 6 00:05:39.834833 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:05:39.834840 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 6 00:05:39.834848 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 00:05:39.834855 kernel: CPU features: detected: LSE atomic instructions
Sep 6 00:05:39.834865 kernel: CPU features: detected: Privileged Access Never
Sep 6 00:05:39.834872 kernel: CPU features: detected: RAS Extension Support
Sep 6 00:05:39.834880 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 6 00:05:39.834887 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:05:39.834894 kernel: alternatives: applying system-wide alternatives
Sep 6 00:05:39.834902 kernel: devtmpfs: initialized
Sep 6 00:05:39.834910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:05:39.834918 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:05:39.834925 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:05:39.834934 kernel: SMBIOS 3.0.0 present.
Sep 6 00:05:39.834941 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 6 00:05:39.834948 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:05:39.834956 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:05:39.834964 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:05:39.834971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:05:39.834979 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:05:39.834986 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 6 00:05:39.834994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:05:39.835003 kernel: cpuidle: using governor menu
Sep 6 00:05:39.835010 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:05:39.835018 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:05:39.835025 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:05:39.835033 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:05:39.835040 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 6 00:05:39.835047 kernel: Modules: 0 pages in range for non-PLT usage
Sep 6 00:05:39.835055 kernel: Modules: 509008 pages in range for PLT usage
Sep 6 00:05:39.835062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:05:39.835071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 6 00:05:39.835078 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:05:39.835086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 6 00:05:39.835093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:05:39.835100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 6 00:05:39.835108 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:05:39.835115 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 6 00:05:39.835122 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:05:39.835130 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:05:39.835139 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:05:39.835146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:05:39.835153 kernel: ACPI: Interpreter enabled
Sep 6 00:05:39.835161 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:05:39.835168 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:05:39.835175 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 00:05:39.835183 kernel: printk: console [ttyAMA0] enabled
Sep 6 00:05:39.835190 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:05:39.835344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:05:39.835425 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:05:39.835494 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:05:39.835570 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 6 00:05:39.835718 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 6 00:05:39.835730 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 6 00:05:39.835738 kernel: PCI host bridge to bus 0000:00
Sep 6 00:05:39.835834 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 6 00:05:39.835944 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:05:39.836008 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 6 00:05:39.836068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:05:39.836153 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 6 00:05:39.836234 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:05:39.836307 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 6 00:05:39.836383 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 6 00:05:39.836453 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:05:39.836523 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:05:39.836615 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 6 00:05:39.836699 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 6 00:05:39.836780 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 6 00:05:39.836850 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:05:39.836919 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 6 00:05:39.836929 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:05:39.836937 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:05:39.836945 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:05:39.836952 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:05:39.836960 kernel: iommu: Default domain type: Translated
Sep 6 00:05:39.836968 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:05:39.836975 kernel: efivars: Registered efivars operations
Sep 6 00:05:39.836982 kernel: vgaarb: loaded
Sep 6 00:05:39.836992 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:05:39.836999 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:05:39.837007 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:05:39.837015 kernel: pnp: PnP ACPI init
Sep 6 00:05:39.837093 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 6 00:05:39.837104 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:05:39.837111 kernel: NET: Registered PF_INET protocol family
Sep 6 00:05:39.837119 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:05:39.837128 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:05:39.837136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:05:39.837144 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:05:39.837152 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 6 00:05:39.837159 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:05:39.837167 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:05:39.837174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:05:39.837182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:05:39.837189 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:05:39.837198 kernel: kvm [1]: HYP mode not available
Sep 6 00:05:39.837206 kernel: Initialise system trusted keyrings
Sep 6 00:05:39.837213 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:05:39.837220 kernel: Key type asymmetric registered
Sep 6 00:05:39.837228 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:05:39.837235 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 6 00:05:39.837243 kernel: io scheduler mq-deadline registered
Sep 6 00:05:39.837250 kernel: io scheduler kyber registered
Sep 6 00:05:39.837257 kernel: io scheduler bfq registered
Sep 6 00:05:39.837267 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:05:39.837275 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:05:39.837283 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 6 00:05:39.837355 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 6 00:05:39.837366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:05:39.837373 kernel: thunder_xcv, ver 1.0
Sep 6 00:05:39.837393 kernel: thunder_bgx, ver 1.0
Sep 6 00:05:39.837401 kernel: nicpf, ver 1.0
Sep 6 00:05:39.837408 kernel: nicvf, ver 1.0
Sep 6 00:05:39.837494 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:05:39.837571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:05:39 UTC (1757117139)
Sep 6 00:05:39.837583 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:05:39.837590 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 6 00:05:39.837608 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 6 00:05:39.837626 kernel: watchdog: Hard watchdog permanently disabled
Sep 6 00:05:39.837634 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:05:39.837641 kernel: Segment Routing with IPv6
Sep 6 00:05:39.837654 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:05:39.837661 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:05:39.837669 kernel: Key type dns_resolver registered
Sep 6 00:05:39.837676 kernel: registered taskstats version 1
Sep 6 00:05:39.837684 kernel: Loading compiled-in X.509 certificates
Sep 6 00:05:39.837691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 6 00:05:39.837698 kernel: Key type .fscrypt registered
Sep 6 00:05:39.837706 kernel: Key type fscrypt-provisioning registered
Sep 6 00:05:39.837713 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:05:39.837722 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:05:39.837729 kernel: ima: No architecture policies found
Sep 6 00:05:39.837737 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:05:39.837744 kernel: clk: Disabling unused clocks
Sep 6 00:05:39.837751 kernel: Freeing unused kernel memory: 39424K
Sep 6 00:05:39.837759 kernel: Run /init as init process
Sep 6 00:05:39.837766 kernel: with arguments:
Sep 6 00:05:39.837773 kernel: /init
Sep 6 00:05:39.837781 kernel: with environment:
Sep 6 00:05:39.837789 kernel: HOME=/
Sep 6 00:05:39.837797 kernel: TERM=linux
Sep 6 00:05:39.837804 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:05:39.837814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 6 00:05:39.837823 systemd[1]: Detected virtualization kvm.
Sep 6 00:05:39.837832 systemd[1]: Detected architecture arm64.
Sep 6 00:05:39.837839 systemd[1]: Running in initrd.
Sep 6 00:05:39.837847 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:05:39.837857 systemd[1]: Hostname set to .
Sep 6 00:05:39.837865 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:05:39.837873 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:05:39.837881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:05:39.837889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:05:39.837898 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 6 00:05:39.837907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 00:05:39.837916 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 6 00:05:39.837924 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 6 00:05:39.837934 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 6 00:05:39.837942 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 6 00:05:39.837950 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:05:39.837959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:05:39.837967 systemd[1]: Reached target paths.target - Path Units.
Sep 6 00:05:39.837977 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 00:05:39.837985 systemd[1]: Reached target swap.target - Swaps.
Sep 6 00:05:39.837993 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 00:05:39.838001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:05:39.838009 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:05:39.838017 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 6 00:05:39.838025 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 6 00:05:39.838033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:05:39.838042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 00:05:39.838052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 00:05:39.838060 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 00:05:39.838068 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 6 00:05:39.838076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 00:05:39.838085 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 6 00:05:39.838093 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:05:39.838102 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 00:05:39.838110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 00:05:39.838120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:05:39.838128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 6 00:05:39.838136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:05:39.838144 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:05:39.838172 systemd-journald[238]: Collecting audit messages is disabled.
Sep 6 00:05:39.838194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 00:05:39.838202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:05:39.838211 systemd-journald[238]: Journal started
Sep 6 00:05:39.838231 systemd-journald[238]: Runtime Journal (/run/log/journal/bdc65881184c48349a223a8a2e352168) is 5.9M, max 47.3M, 41.4M free.
Sep 6 00:05:39.834782 systemd-modules-load[240]: Inserted module 'overlay'
Sep 6 00:05:39.842629 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 00:05:39.842471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 00:05:39.846635 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:05:39.848247 systemd-modules-load[240]: Inserted module 'br_netfilter'
Sep 6 00:05:39.849037 kernel: Bridge firewalling registered
Sep 6 00:05:39.849774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:05:39.851286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 00:05:39.853138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 00:05:39.854538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:05:39.859860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:05:39.864073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 00:05:39.865229 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:05:39.870554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:05:39.876740 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 6 00:05:39.877658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:05:39.880523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 00:05:39.888294 dracut-cmdline[276]: dracut-dracut-053
Sep 6 00:05:39.891641 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:05:39.906035 systemd-resolved[280]: Positive Trust Anchors:
Sep 6 00:05:39.906056 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:05:39.906088 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 00:05:39.910819 systemd-resolved[280]: Defaulting to hostname 'linux'.
Sep 6 00:05:39.911923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 00:05:39.913521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:05:39.959625 kernel: SCSI subsystem initialized
Sep 6 00:05:39.964638 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:05:39.971624 kernel: iscsi: registered transport (tcp)
Sep 6 00:05:39.984756 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:05:39.984781 kernel: QLogic iSCSI HBA Driver
Sep 6 00:05:40.028855 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 6 00:05:40.034767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 6 00:05:40.050691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:05:40.050754 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:05:40.050765 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 6 00:05:40.098636 kernel: raid6: neonx8 gen() 15306 MB/s
Sep 6 00:05:40.115621 kernel: raid6: neonx4 gen() 15353 MB/s
Sep 6 00:05:40.132619 kernel: raid6: neonx2 gen() 13199 MB/s
Sep 6 00:05:40.149619 kernel: raid6: neonx1 gen() 10316 MB/s
Sep 6 00:05:40.166730 kernel: raid6: int64x8 gen() 6880 MB/s
Sep 6 00:05:40.183627 kernel: raid6: int64x4 gen() 7315 MB/s
Sep 6 00:05:40.200627 kernel: raid6: int64x2 gen() 6096 MB/s
Sep 6 00:05:40.217636 kernel: raid6: int64x1 gen() 4974 MB/s
Sep 6 00:05:40.217680 kernel: raid6: using algorithm neonx4 gen() 15353 MB/s
Sep 6 00:05:40.234639 kernel: raid6: .... xor() 11949 MB/s, rmw enabled
Sep 6 00:05:40.234664 kernel: raid6: using neon recovery algorithm
Sep 6 00:05:40.239896 kernel: xor: measuring software checksum speed
Sep 6 00:05:40.239918 kernel: 8regs : 19793 MB/sec
Sep 6 00:05:40.241035 kernel: 32regs : 19655 MB/sec
Sep 6 00:05:40.241054 kernel: arm64_neon : 24274 MB/sec
Sep 6 00:05:40.241064 kernel: xor: using function: arm64_neon (24274 MB/sec)
Sep 6 00:05:40.290628 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 6 00:05:40.301167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 00:05:40.308785 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:05:40.320132 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 6 00:05:40.323373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:05:40.325987 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 6 00:05:40.341162 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 6 00:05:40.368939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:05:40.380807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 00:05:40.423864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:05:40.437011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 6 00:05:40.449061 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:05:40.450924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:05:40.453006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:05:40.454694 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 00:05:40.461822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 6 00:05:40.474606 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 6 00:05:40.475149 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 00:05:40.475501 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:05:40.484489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:05:40.484509 kernel: GPT:9289727 != 19775487
Sep 6 00:05:40.484518 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:05:40.484527 kernel: GPT:9289727 != 19775487
Sep 6 00:05:40.484542 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:05:40.484552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:05:40.486095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:05:40.486206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:05:40.491591 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:05:40.494920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:05:40.495138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:05:40.496647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:05:40.505617 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (522)
Sep 6 00:05:40.506625 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (509)
Sep 6 00:05:40.507855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:05:40.517746 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 6 00:05:40.519512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:05:40.528928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 6 00:05:40.535502 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 6 00:05:40.536609 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 6 00:05:40.542705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 00:05:40.554746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 6 00:05:40.556462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:05:40.561647 disk-uuid[551]: Primary Header is updated.
Sep 6 00:05:40.561647 disk-uuid[551]: Secondary Entries is updated.
Sep 6 00:05:40.561647 disk-uuid[551]: Secondary Header is updated.
Sep 6 00:05:40.565669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:05:40.569622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:05:40.573623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:05:40.580534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:05:41.572639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:05:41.573447 disk-uuid[552]: The operation has completed successfully.
Sep 6 00:05:41.594227 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:05:41.594320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 6 00:05:41.622766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 6 00:05:41.625627 sh[573]: Success
Sep 6 00:05:41.635911 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 00:05:41.662031 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 6 00:05:41.672966 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 6 00:05:41.674357 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 6 00:05:41.685390 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 6 00:05:41.685430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:05:41.685442 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 6 00:05:41.687099 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 6 00:05:41.687119 kernel: BTRFS info (device dm-0): using free space tree
Sep 6 00:05:41.691773 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 6 00:05:41.692966 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 6 00:05:41.698743 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 6 00:05:41.700117 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 6 00:05:41.707446 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:05:41.707509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:05:41.707530 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:05:41.710665 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:05:41.717839 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:05:41.719666 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:05:41.725544 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 6 00:05:41.731802 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 6 00:05:41.794718 ignition[667]: Ignition 2.19.0
Sep 6 00:05:41.794719 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:05:41.794727 ignition[667]: Stage: fetch-offline
Sep 6 00:05:41.794762 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:41.794771 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:41.794921 ignition[667]: parsed url from cmdline: ""
Sep 6 00:05:41.794924 ignition[667]: no config URL provided
Sep 6 00:05:41.794928 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:05:41.794935 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:05:41.794957 ignition[667]: op(1): [started] loading QEMU firmware config module
Sep 6 00:05:41.794962 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 00:05:41.802189 ignition[667]: op(1): [finished] loading QEMU firmware config module
Sep 6 00:05:41.802209 ignition[667]: QEMU firmware config was not found. Ignoring...
Sep 6 00:05:41.805743 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 00:05:41.829052 systemd-networkd[766]: lo: Link UP
Sep 6 00:05:41.829064 systemd-networkd[766]: lo: Gained carrier
Sep 6 00:05:41.829883 systemd-networkd[766]: Enumeration completed
Sep 6 00:05:41.830001 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 00:05:41.830293 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:05:41.830296 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:05:41.831270 systemd-networkd[766]: eth0: Link UP
Sep 6 00:05:41.831273 systemd-networkd[766]: eth0: Gained carrier
Sep 6 00:05:41.831281 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:05:41.831826 systemd[1]: Reached target network.target - Network.
Sep 6 00:05:41.852651 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:05:41.868908 ignition[667]: parsing config with SHA512: 3c21208ddb469b4e0efa4b171dad9ca35d501696706ace515a98492c29fe108b368c0336fc1e6c855560c1892a68598ff18aa9e1126702fb006e10471d0ff856
Sep 6 00:05:41.876968 unknown[667]: fetched base config from "system"
Sep 6 00:05:41.876978 unknown[667]: fetched user config from "qemu"
Sep 6 00:05:41.877463 ignition[667]: fetch-offline: fetch-offline passed
Sep 6 00:05:41.877526 ignition[667]: Ignition finished successfully
Sep 6 00:05:41.879294 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:05:41.880701 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 00:05:41.888755 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 6 00:05:41.900331 ignition[770]: Ignition 2.19.0
Sep 6 00:05:41.900340 ignition[770]: Stage: kargs
Sep 6 00:05:41.900516 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:41.900525 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:41.901509 ignition[770]: kargs: kargs passed
Sep 6 00:05:41.901564 ignition[770]: Ignition finished successfully
Sep 6 00:05:41.903563 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 6 00:05:41.909759 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 6 00:05:41.919524 ignition[779]: Ignition 2.19.0
Sep 6 00:05:41.919535 ignition[779]: Stage: disks
Sep 6 00:05:41.919739 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:41.919749 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:41.920715 ignition[779]: disks: disks passed
Sep 6 00:05:41.920763 ignition[779]: Ignition finished successfully
Sep 6 00:05:41.923655 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 6 00:05:41.925307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 6 00:05:41.926217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 6 00:05:41.927826 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 00:05:41.929506 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 00:05:41.931085 systemd[1]: Reached target basic.target - Basic System.
Sep 6 00:05:41.940751 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 6 00:05:41.952223 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 6 00:05:41.956696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 6 00:05:41.964782 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 6 00:05:42.004625 kernel: EXT4-fs (vda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 6 00:05:42.005232 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 6 00:05:42.006364 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 6 00:05:42.019745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:05:42.023084 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 6 00:05:42.026558 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 6 00:05:42.033415 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (797)
Sep 6 00:05:42.033450 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:05:42.033462 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:05:42.033472 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:05:42.026642 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:05:42.026668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:05:42.030976 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 6 00:05:42.035261 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 6 00:05:42.039713 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:05:42.041334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:05:42.074679 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:05:42.078771 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:05:42.082807 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:05:42.086439 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:05:42.155872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 6 00:05:42.162757 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 6 00:05:42.164286 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 6 00:05:42.169608 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:05:42.185653 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 6 00:05:42.191422 ignition[912]: INFO : Ignition 2.19.0
Sep 6 00:05:42.191422 ignition[912]: INFO : Stage: mount
Sep 6 00:05:42.192729 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:42.192729 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:42.195286 ignition[912]: INFO : mount: mount passed
Sep 6 00:05:42.195286 ignition[912]: INFO : Ignition finished successfully
Sep 6 00:05:42.195436 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 6 00:05:42.203746 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 6 00:05:42.684585 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 6 00:05:42.693774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:05:42.698612 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (925)
Sep 6 00:05:42.701026 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:05:42.701071 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:05:42.701082 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:05:42.703616 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:05:42.704047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:05:42.719428 ignition[942]: INFO : Ignition 2.19.0
Sep 6 00:05:42.719428 ignition[942]: INFO : Stage: files
Sep 6 00:05:42.720705 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:42.720705 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:42.720705 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:05:42.723842 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:05:42.723842 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:05:42.723842 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:05:42.723842 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:05:42.723842 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:05:42.723364 unknown[942]: wrote ssh authorized keys file for user: core
Sep 6 00:05:42.729521 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:05:42.729521 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:05:42.729521 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 6 00:05:42.729521 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 6 00:05:42.786308 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:05:43.087575 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 6 00:05:43.087575 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:05:43.087575 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 00:05:43.302856 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 6 00:05:43.370649 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:05:43.370649 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:05:43.373553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 6 00:05:43.690301 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 6 00:05:43.832948 systemd-networkd[766]: eth0: Gained IPv6LL
Sep 6 00:05:44.157641 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 6 00:05:44.159851 ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:05:44.179764 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:05:44.185609 ignition[942]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:05:44.186937 ignition[942]: INFO : files: files passed
Sep 6 00:05:44.186937 ignition[942]: INFO : Ignition finished successfully
Sep 6 00:05:44.188343 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 6 00:05:44.196872 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 6 00:05:44.199371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 6 00:05:44.202382 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:05:44.202513 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 6 00:05:44.208457 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 6 00:05:44.211690 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:05:44.211690 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:05:44.214298 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:05:44.215606 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:05:44.217954 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 6 00:05:44.230771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 6 00:05:44.250909 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:05:44.251673 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 6 00:05:44.252907 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 6 00:05:44.254436 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 6 00:05:44.255879 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 6 00:05:44.256719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 6 00:05:44.272338 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:05:44.291819 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 6 00:05:44.300032 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:05:44.301105 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:05:44.302777 systemd[1]: Stopped target timers.target - Timer Units.
Sep 6 00:05:44.304300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:05:44.304428 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:05:44.306454 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 6 00:05:44.308103 systemd[1]: Stopped target basic.target - Basic System.
Sep 6 00:05:44.309726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 6 00:05:44.311198 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:05:44.312705 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 6 00:05:44.314412 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 6 00:05:44.315873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:05:44.317443 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 6 00:05:44.319011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 6 00:05:44.320397 systemd[1]: Stopped target swap.target - Swaps.
Sep 6 00:05:44.321592 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:05:44.321736 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:05:44.323730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:05:44.325389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:05:44.327273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 6 00:05:44.328676 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:05:44.329982 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:05:44.330110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:05:44.332419 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:05:44.332537 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:05:44.334314 systemd[1]: Stopped target paths.target - Path Units.
Sep 6 00:05:44.335624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:05:44.339641 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:05:44.340733 systemd[1]: Stopped target slices.target - Slice Units.
Sep 6 00:05:44.342425 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 6 00:05:44.343759 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:05:44.343855 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:05:44.345189 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:05:44.345274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:05:44.346659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:05:44.346775 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:05:44.348450 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:05:44.348564 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 6 00:05:44.355788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 6 00:05:44.356488 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:05:44.356639 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:05:44.361824 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 6 00:05:44.362558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:05:44.362699 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:05:44.364384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:05:44.370391 ignition[996]: INFO : Ignition 2.19.0
Sep 6 00:05:44.370391 ignition[996]: INFO : Stage: umount
Sep 6 00:05:44.370391 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:05:44.370391 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:05:44.370391 ignition[996]: INFO : umount: umount passed
Sep 6 00:05:44.370391 ignition[996]: INFO : Ignition finished successfully
Sep 6 00:05:44.364491 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:05:44.370881 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:05:44.370973 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 6 00:05:44.374179 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:05:44.374260 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 6 00:05:44.376471 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:05:44.378525 systemd[1]: Stopped target network.target - Network.
Sep 6 00:05:44.379958 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:05:44.380018 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 6 00:05:44.381696 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:05:44.381736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 6 00:05:44.383242 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:05:44.383283 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 6 00:05:44.385108 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 6 00:05:44.385150 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 6 00:05:44.386868 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 6 00:05:44.388296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 6 00:05:44.396218 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:05:44.396342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 6 00:05:44.398065 systemd-networkd[766]: eth0: DHCPv6 lease lost
Sep 6 00:05:44.399899 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:05:44.400034 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 6 00:05:44.405282 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:05:44.405344 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:05:44.415737 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 6 00:05:44.416907 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:05:44.416977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:05:44.418714 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:05:44.418762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:05:44.420489 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:05:44.420547 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:05:44.422388 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 6 00:05:44.422431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 00:05:44.424369 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:05:44.434563 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:05:44.434706 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 6 00:05:44.438696 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:05:44.438857 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 6 00:05:44.440429 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:05:44.440470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 6 00:05:44.441953 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:05:44.442069 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:05:44.444065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:05:44.444134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 6 00:05:44.445295 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:05:44.445328 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:05:44.446749 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:05:44.446796 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 6 00:05:44.449138 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:05:44.449179 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 6 00:05:44.451430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:05:44.451472 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 6 00:05:44.463808 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 6 00:05:44.464670 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:05:44.464734 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:05:44.466501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:05:44.466553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:05:44.472458 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:05:44.472581 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 6 00:05:44.474588 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 6 00:05:44.476978 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 6 00:05:44.490580 systemd[1]: Switching root. Sep 6 00:05:44.517450 systemd-journald[238]: Journal stopped Sep 6 00:05:45.240164 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 6 00:05:45.240217 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:05:45.240230 kernel: SELinux: policy capability open_perms=1 Sep 6 00:05:45.240239 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:05:45.240249 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:05:45.240263 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:05:45.240273 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:05:45.240285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:05:45.240295 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:05:45.240304 kernel: audit: type=1403 audit(1757117144.716:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:05:45.240315 systemd[1]: Successfully loaded SELinux policy in 30.612ms. Sep 6 00:05:45.240331 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.020ms. 
Sep 6 00:05:45.240350 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 6 00:05:45.240373 systemd[1]: Detected virtualization kvm. Sep 6 00:05:45.240383 systemd[1]: Detected architecture arm64. Sep 6 00:05:45.240394 systemd[1]: Detected first boot. Sep 6 00:05:45.240410 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:05:45.240421 zram_generator::config[1064]: No configuration found. Sep 6 00:05:45.240432 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:05:45.240457 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:05:45.240468 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 6 00:05:45.240480 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 6 00:05:45.240491 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 6 00:05:45.240502 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 6 00:05:45.240514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 6 00:05:45.240525 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 6 00:05:45.240553 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 6 00:05:45.240568 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 6 00:05:45.240579 systemd[1]: Created slice user.slice - User and Session Slice. Sep 6 00:05:45.240589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 6 00:05:45.240620 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 6 00:05:45.240631 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 6 00:05:45.240642 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 6 00:05:45.240658 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 6 00:05:45.240669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 6 00:05:45.240680 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 6 00:05:45.240691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 6 00:05:45.240701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 6 00:05:45.240711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 6 00:05:45.240722 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 6 00:05:45.240733 systemd[1]: Reached target slices.target - Slice Units. Sep 6 00:05:45.240745 systemd[1]: Reached target swap.target - Swaps. Sep 6 00:05:45.240756 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 6 00:05:45.240766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 6 00:05:45.240777 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 6 00:05:45.240787 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 6 00:05:45.240799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 6 00:05:45.240810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 6 00:05:45.240821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 6 00:05:45.240831 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 6 00:05:45.240843 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 6 00:05:45.240853 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 6 00:05:45.240863 systemd[1]: Mounting media.mount - External Media Directory... Sep 6 00:05:45.240874 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 6 00:05:45.240886 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 6 00:05:45.240897 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 6 00:05:45.240907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 6 00:05:45.240918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:05:45.240928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 6 00:05:45.240940 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 6 00:05:45.240951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:05:45.240961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:05:45.240972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:05:45.240982 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 6 00:05:45.240992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:05:45.241003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:05:45.241014 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Sep 6 00:05:45.241031 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:05:45.241041 kernel: fuse: init (API version 7.39) Sep 6 00:05:45.241050 kernel: ACPI: bus type drm_connector registered Sep 6 00:05:45.241060 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 6 00:05:45.241071 kernel: loop: module loaded Sep 6 00:05:45.241081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 6 00:05:45.241091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 6 00:05:45.241102 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 6 00:05:45.241113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 6 00:05:45.241125 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 6 00:05:45.241136 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 6 00:05:45.241162 systemd-journald[1150]: Collecting audit messages is disabled. Sep 6 00:05:45.241183 systemd[1]: Mounted media.mount - External Media Directory. Sep 6 00:05:45.241193 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 6 00:05:45.241204 systemd-journald[1150]: Journal started Sep 6 00:05:45.241226 systemd-journald[1150]: Runtime Journal (/run/log/journal/bdc65881184c48349a223a8a2e352168) is 5.9M, max 47.3M, 41.4M free. Sep 6 00:05:45.244325 systemd[1]: Started systemd-journald.service - Journal Service. Sep 6 00:05:45.245375 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 6 00:05:45.246505 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 6 00:05:45.247778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 6 00:05:45.249081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 6 00:05:45.250345 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:05:45.250511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 6 00:05:45.251881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:45.252039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:05:45.253255 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:05:45.253412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:05:45.254636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:45.254794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:05:45.256308 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:05:45.256464 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 6 00:05:45.257723 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:45.257936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:05:45.259198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 6 00:05:45.260746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 6 00:05:45.262325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 6 00:05:45.273170 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 6 00:05:45.282696 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 6 00:05:45.284523 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 6 00:05:45.285467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:05:45.288392 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 6 00:05:45.291754 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 6 00:05:45.292798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:05:45.293894 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 6 00:05:45.294808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:05:45.299528 systemd-journald[1150]: Time spent on flushing to /var/log/journal/bdc65881184c48349a223a8a2e352168 is 19.162ms for 847 entries. Sep 6 00:05:45.299528 systemd-journald[1150]: System Journal (/var/log/journal/bdc65881184c48349a223a8a2e352168) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:05:45.330281 systemd-journald[1150]: Received client request to flush runtime journal. Sep 6 00:05:45.297815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:05:45.300820 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 6 00:05:45.303749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 6 00:05:45.305149 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 6 00:05:45.306399 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 6 00:05:45.310835 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 6 00:05:45.314050 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 6 00:05:45.316097 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 6 00:05:45.324450 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Sep 6 00:05:45.324461 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. 
Sep 6 00:05:45.327053 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:05:45.328338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 6 00:05:45.339747 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 6 00:05:45.341132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 6 00:05:45.342630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:05:45.361807 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 6 00:05:45.367820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 6 00:05:45.379043 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 6 00:05:45.379309 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 6 00:05:45.382999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:05:45.747884 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 6 00:05:45.758795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:05:45.778522 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Sep 6 00:05:45.797899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:05:45.810773 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 6 00:05:45.822971 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 6 00:05:45.832107 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 6 00:05:45.842644 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1238) Sep 6 00:05:45.864833 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 6 00:05:45.884312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 6 00:05:45.914936 systemd-networkd[1230]: lo: Link UP Sep 6 00:05:45.914948 systemd-networkd[1230]: lo: Gained carrier Sep 6 00:05:45.915757 systemd-networkd[1230]: Enumeration completed Sep 6 00:05:45.915884 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 00:05:45.916198 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:05:45.916206 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:05:45.916804 systemd-networkd[1230]: eth0: Link UP Sep 6 00:05:45.916812 systemd-networkd[1230]: eth0: Gained carrier Sep 6 00:05:45.916825 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:05:45.925798 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 6 00:05:45.934031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:05:45.936663 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:05:45.943630 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 6 00:05:45.946317 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 6 00:05:45.958630 lvm[1260]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:05:45.972691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:05:45.989197 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 6 00:05:45.990554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 6 00:05:46.000952 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 6 00:05:46.004143 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:05:46.035090 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 6 00:05:46.036303 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 6 00:05:46.037479 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:05:46.037508 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 6 00:05:46.038335 systemd[1]: Reached target machines.target - Containers. Sep 6 00:05:46.040352 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 6 00:05:46.054783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 6 00:05:46.056894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 6 00:05:46.057982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:05:46.058990 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 6 00:05:46.061797 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 6 00:05:46.069794 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 6 00:05:46.072203 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 6 00:05:46.077697 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 6 00:05:46.091007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 6 00:05:46.091785 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 6 00:05:46.093896 kernel: loop0: detected capacity change from 0 to 114432 Sep 6 00:05:46.103657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:05:46.135650 kernel: loop1: detected capacity change from 0 to 203944 Sep 6 00:05:46.188640 kernel: loop2: detected capacity change from 0 to 114328 Sep 6 00:05:46.239622 kernel: loop3: detected capacity change from 0 to 114432 Sep 6 00:05:46.246622 kernel: loop4: detected capacity change from 0 to 203944 Sep 6 00:05:46.252629 kernel: loop5: detected capacity change from 0 to 114328 Sep 6 00:05:46.255710 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 6 00:05:46.256107 (sd-merge)[1291]: Merged extensions into '/usr'. Sep 6 00:05:46.260183 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Sep 6 00:05:46.260204 systemd[1]: Reloading... Sep 6 00:05:46.301329 zram_generator::config[1319]: No configuration found. Sep 6 00:05:46.347579 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:05:46.408062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:05:46.451718 systemd[1]: Reloading finished in 191 ms. Sep 6 00:05:46.467362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 6 00:05:46.468744 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 6 00:05:46.481771 systemd[1]: Starting ensure-sysext.service... Sep 6 00:05:46.483648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 6 00:05:46.489024 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)... Sep 6 00:05:46.489041 systemd[1]: Reloading... Sep 6 00:05:46.504305 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:05:46.504642 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 6 00:05:46.505304 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:05:46.505630 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Sep 6 00:05:46.505690 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Sep 6 00:05:46.508204 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:05:46.508220 systemd-tmpfiles[1362]: Skipping /boot Sep 6 00:05:46.516959 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:05:46.516978 systemd-tmpfiles[1362]: Skipping /boot Sep 6 00:05:46.535640 zram_generator::config[1389]: No configuration found. Sep 6 00:05:46.640991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:05:46.686796 systemd[1]: Reloading finished in 197 ms. Sep 6 00:05:46.705264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 00:05:46.725781 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 6 00:05:46.728428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 6 00:05:46.730913 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 6 00:05:46.734009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 6 00:05:46.736886 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 6 00:05:46.742471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:05:46.753267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:05:46.759937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:05:46.764343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:05:46.765461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:05:46.769392 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 6 00:05:46.771563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:46.771761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:05:46.774051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:46.774215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:05:46.774824 augenrules[1462]: No rules Sep 6 00:05:46.776389 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:46.776958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:05:46.778890 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 6 00:05:46.785005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 6 00:05:46.797731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:05:46.799194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:05:46.801891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 6 00:05:46.804298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:05:46.808911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:05:46.812968 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 6 00:05:46.813945 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:05:46.815097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 6 00:05:46.816825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:46.816986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:05:46.818348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:46.818505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:05:46.819976 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:46.820181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:05:46.825573 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 6 00:05:46.829564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:05:46.839962 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:05:46.841995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:05:46.843821 systemd-resolved[1439]: Positive Trust Anchors: Sep 6 00:05:46.843853 systemd-resolved[1439]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:05:46.843885 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 00:05:46.846050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:05:46.850204 systemd-resolved[1439]: Defaulting to hostname 'linux'. Sep 6 00:05:46.850826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:05:46.851722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:05:46.851838 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:05:46.852569 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 00:05:46.854020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:46.854178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:05:46.855702 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:05:46.855881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:05:46.857108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:46.857246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 6 00:05:46.858695 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:46.858875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:05:46.861623 systemd[1]: Finished ensure-sysext.service. Sep 6 00:05:46.865875 systemd[1]: Reached target network.target - Network. Sep 6 00:05:46.866590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 6 00:05:46.867494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:05:46.867563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:05:46.881869 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 6 00:05:46.924048 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 6 00:05:46.924580 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:05:46.924642 systemd-timesyncd[1507]: Initial clock synchronization to Sat 2025-09-06 00:05:47.258344 UTC. Sep 6 00:05:46.925628 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 00:05:46.926511 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 6 00:05:46.930119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 6 00:05:46.931141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 6 00:05:46.932231 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:05:46.932263 systemd[1]: Reached target paths.target - Path Units. Sep 6 00:05:46.933034 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 6 00:05:46.935543 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 6 00:05:46.936663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 6 00:05:46.937700 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 00:05:46.939270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 6 00:05:46.941947 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 6 00:05:46.944127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 6 00:05:46.948654 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 6 00:05:46.949587 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 00:05:46.950360 systemd[1]: Reached target basic.target - Basic System.
Sep 6 00:05:46.951342 systemd[1]: System is tainted: cgroupsv1
Sep 6 00:05:46.951388 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 6 00:05:46.951416 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 6 00:05:46.952691 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 6 00:05:46.954586 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 6 00:05:46.956506 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 6 00:05:46.958852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 6 00:05:46.961544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 6 00:05:46.964169 jq[1513]: false
Sep 6 00:05:46.964567 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 6 00:05:46.968383 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 6 00:05:46.971870 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 6 00:05:46.976113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 6 00:05:46.982407 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 6 00:05:46.986748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:05:46.989096 systemd[1]: Starting update-engine.service - Update Engine...
Sep 6 00:05:46.992037 extend-filesystems[1515]: Found loop3
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found loop4
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found loop5
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda1
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda2
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda3
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found usr
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda4
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda6
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda7
Sep 6 00:05:46.994349 extend-filesystems[1515]: Found vda9
Sep 6 00:05:46.994349 extend-filesystems[1515]: Checking size of /dev/vda9
Sep 6 00:05:46.993322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 6 00:05:46.994656 dbus-daemon[1512]: [system] SELinux support is enabled
Sep 6 00:05:46.999926 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 6 00:05:47.015099 extend-filesystems[1515]: Resized partition /dev/vda9
Sep 6 00:05:47.017390 jq[1533]: true
Sep 6 00:05:47.015130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:05:47.015377 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 6 00:05:47.015732 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:05:47.016211 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 6 00:05:47.019758 extend-filesystems[1542]: resize2fs 1.47.1 (20-May-2024)
Sep 6 00:05:47.026337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1236)
Sep 6 00:05:47.026795 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 6 00:05:47.026829 update_engine[1532]: I20250906 00:05:47.021538 1532 main.cc:92] Flatcar Update Engine starting
Sep 6 00:05:47.026829 update_engine[1532]: I20250906 00:05:47.026607 1532 update_check_scheduler.cc:74] Next update check in 11m43s
Sep 6 00:05:47.021066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:05:47.021350 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 6 00:05:47.048662 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 6 00:05:47.050390 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 6 00:05:47.068856 jq[1545]: true
Sep 6 00:05:47.066945 systemd[1]: Started update-engine.service - Update Engine.
Sep 6 00:05:47.068323 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:05:47.068352 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 6 00:05:47.070206 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:05:47.070230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 6 00:05:47.071497 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:05:47.071497 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:05:47.071497 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 6 00:05:47.076919 extend-filesystems[1515]: Resized filesystem in /dev/vda9
Sep 6 00:05:47.074242 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:05:47.075397 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 6 00:05:47.078276 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:05:47.079846 systemd-logind[1525]: New seat seat0.
Sep 6 00:05:47.080328 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:05:47.080559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 6 00:05:47.084112 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 6 00:05:47.092602 tar[1544]: linux-arm64/helm
Sep 6 00:05:47.113810 bash[1576]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:05:47.115303 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 6 00:05:47.118456 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 6 00:05:47.134829 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:05:47.201510 containerd[1546]: time="2025-09-06T00:05:47.201418857Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 6 00:05:47.226758 containerd[1546]: time="2025-09-06T00:05:47.226711012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228155 containerd[1546]: time="2025-09-06T00:05:47.228116933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228155 containerd[1546]: time="2025-09-06T00:05:47.228153435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:05:47.228239 containerd[1546]: time="2025-09-06T00:05:47.228170811Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:05:47.228335 containerd[1546]: time="2025-09-06T00:05:47.228310320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 6 00:05:47.228335 containerd[1546]: time="2025-09-06T00:05:47.228332821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228403 containerd[1546]: time="2025-09-06T00:05:47.228385074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228403 containerd[1546]: time="2025-09-06T00:05:47.228401325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228626 containerd[1546]: time="2025-09-06T00:05:47.228591212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228626 containerd[1546]: time="2025-09-06T00:05:47.228612589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228684 containerd[1546]: time="2025-09-06T00:05:47.228626798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228684 containerd[1546]: time="2025-09-06T00:05:47.228657800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228785 containerd[1546]: time="2025-09-06T00:05:47.228764056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.228984 containerd[1546]: time="2025-09-06T00:05:47.228961235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:05:47.229123 containerd[1546]: time="2025-09-06T00:05:47.229100494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:05:47.229123 containerd[1546]: time="2025-09-06T00:05:47.229120704Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:05:47.229214 containerd[1546]: time="2025-09-06T00:05:47.229196083Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:05:47.229256 containerd[1546]: time="2025-09-06T00:05:47.229242461Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:05:47.233238 containerd[1546]: time="2025-09-06T00:05:47.233195165Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:05:47.233303 containerd[1546]: time="2025-09-06T00:05:47.233252419Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:05:47.233303 containerd[1546]: time="2025-09-06T00:05:47.233275003Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 6 00:05:47.233303 containerd[1546]: time="2025-09-06T00:05:47.233290838Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 6 00:05:47.233376 containerd[1546]: time="2025-09-06T00:05:47.233306047Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233436263Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233733865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233835663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233850831Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233863623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233887250Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233902542Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233915585Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233928919Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233947712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233960129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.233989881Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.234003882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:05:47.234035 containerd[1546]: time="2025-09-06T00:05:47.234023592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234037176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234050010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234061927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234077970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234090346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234102596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234115056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234127140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234140974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234152100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234163684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234176559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234192685Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234217937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234306 containerd[1546]: time="2025-09-06T00:05:47.234234855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234245522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234352070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234369030Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234379322Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234390906Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234400323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234415324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234424783Z" level=info msg="NRI interface is disabled by configuration."
Sep 6 00:05:47.234560 containerd[1546]: time="2025-09-06T00:05:47.234434950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:05:47.235526 containerd[1546]: time="2025-09-06T00:05:47.234780889Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:05:47.235526 containerd[1546]: time="2025-09-06T00:05:47.234841517Z" level=info msg="Connect containerd service"
Sep 6 00:05:47.235526 containerd[1546]: time="2025-09-06T00:05:47.234950066Z" level=info msg="using legacy CRI server"
Sep 6 00:05:47.235526 containerd[1546]: time="2025-09-06T00:05:47.234959066Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 6 00:05:47.235526 containerd[1546]: time="2025-09-06T00:05:47.235049905Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:05:47.235754 containerd[1546]: time="2025-09-06T00:05:47.235653485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:05:47.235923 containerd[1546]: time="2025-09-06T00:05:47.235867248Z" level=info msg="Start subscribing containerd event"
Sep 6 00:05:47.235951 containerd[1546]: time="2025-09-06T00:05:47.235933044Z" level=info msg="Start recovering state"
Sep 6 00:05:47.236020 containerd[1546]: time="2025-09-06T00:05:47.236003465Z" level=info msg="Start event monitor"
Sep 6 00:05:47.236053 containerd[1546]: time="2025-09-06T00:05:47.236025674Z" level=info msg="Start snapshots syncer"
Sep 6 00:05:47.236053 containerd[1546]: time="2025-09-06T00:05:47.236038175Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:05:47.236053 containerd[1546]: time="2025-09-06T00:05:47.236047092Z" level=info msg="Start streaming server"
Sep 6 00:05:47.236139 containerd[1546]: time="2025-09-06T00:05:47.236117763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:05:47.236166 containerd[1546]: time="2025-09-06T00:05:47.236159933Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:05:47.237600 containerd[1546]: time="2025-09-06T00:05:47.236207977Z" level=info msg="containerd successfully booted in 0.036465s"
Sep 6 00:05:47.236319 systemd[1]: Started containerd.service - containerd container runtime.
Sep 6 00:05:47.416792 systemd-networkd[1230]: eth0: Gained IPv6LL
Sep 6 00:05:47.423571 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 6 00:05:47.425247 systemd[1]: Reached target network-online.target - Network is Online.
Sep 6 00:05:47.441570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 6 00:05:47.444553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:47.449827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 6 00:05:47.468204 tar[1544]: linux-arm64/LICENSE
Sep 6 00:05:47.468204 tar[1544]: linux-arm64/README.md
Sep 6 00:05:47.479355 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 6 00:05:47.479596 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 6 00:05:47.482055 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 6 00:05:47.484423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 6 00:05:47.486149 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 6 00:05:48.050407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:48.054392 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:05:48.098129 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:05:48.123598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 6 00:05:48.139916 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 6 00:05:48.148816 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:05:48.149081 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 6 00:05:48.161891 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 6 00:05:48.171825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 6 00:05:48.174508 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 6 00:05:48.176669 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 6 00:05:48.177745 systemd[1]: Reached target getty.target - Login Prompts.
Sep 6 00:05:48.178587 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 6 00:05:48.182772 systemd[1]: Startup finished in 5.556s (kernel) + 3.497s (userspace) = 9.053s.
Sep 6 00:05:48.469758 kubelet[1630]: E0906 00:05:48.469616 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:05:48.472518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:05:48.472960 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:05:52.255944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 6 00:05:52.268854 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:37554.service - OpenSSH per-connection server daemon (10.0.0.1:37554).
Sep 6 00:05:52.314725 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 37554 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:52.316199 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:52.325253 systemd-logind[1525]: New session 1 of user core.
Sep 6 00:05:52.326379 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 6 00:05:52.338843 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 6 00:05:52.348516 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 6 00:05:52.351812 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 6 00:05:52.358045 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:05:52.440401 systemd[1668]: Queued start job for default target default.target.
Sep 6 00:05:52.441092 systemd[1668]: Created slice app.slice - User Application Slice.
Sep 6 00:05:52.441116 systemd[1668]: Reached target paths.target - Paths.
Sep 6 00:05:52.441128 systemd[1668]: Reached target timers.target - Timers.
Sep 6 00:05:52.453756 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 6 00:05:52.459520 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 6 00:05:52.459579 systemd[1668]: Reached target sockets.target - Sockets.
Sep 6 00:05:52.459591 systemd[1668]: Reached target basic.target - Basic System.
Sep 6 00:05:52.459647 systemd[1668]: Reached target default.target - Main User Target.
Sep 6 00:05:52.459672 systemd[1668]: Startup finished in 96ms.
Sep 6 00:05:52.459948 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 6 00:05:52.461296 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 6 00:05:52.521885 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:37560.service - OpenSSH per-connection server daemon (10.0.0.1:37560).
Sep 6 00:05:52.570080 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 37560 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:52.571531 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:52.576668 systemd-logind[1525]: New session 2 of user core.
Sep 6 00:05:52.587885 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 6 00:05:52.642249 sshd[1680]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:52.650856 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:37564.service - OpenSSH per-connection server daemon (10.0.0.1:37564).
Sep 6 00:05:52.651236 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:37560.service: Deactivated successfully.
Sep 6 00:05:52.653045 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Sep 6 00:05:52.653575 systemd[1]: session-2.scope: Deactivated successfully.
Sep 6 00:05:52.655827 systemd-logind[1525]: Removed session 2.
Sep 6 00:05:52.698011 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 37564 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:52.699445 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:52.704864 systemd-logind[1525]: New session 3 of user core.
Sep 6 00:05:52.715919 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 6 00:05:52.768588 sshd[1685]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:52.782913 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580).
Sep 6 00:05:52.783312 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:37564.service: Deactivated successfully.
Sep 6 00:05:52.784754 systemd[1]: session-3.scope: Deactivated successfully.
Sep 6 00:05:52.787769 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Sep 6 00:05:52.790183 systemd-logind[1525]: Removed session 3.
Sep 6 00:05:52.817322 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:52.818279 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:52.824125 systemd-logind[1525]: New session 4 of user core.
Sep 6 00:05:52.834485 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 6 00:05:52.895205 sshd[1694]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:52.916845 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:37596.service - OpenSSH per-connection server daemon (10.0.0.1:37596).
Sep 6 00:05:52.917528 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:37580.service: Deactivated successfully.
Sep 6 00:05:52.920742 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Sep 6 00:05:52.920763 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 00:05:52.923076 systemd-logind[1525]: Removed session 4.
Sep 6 00:05:52.955435 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 37596 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:52.956233 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:52.961556 systemd-logind[1525]: New session 5 of user core.
Sep 6 00:05:52.967887 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 6 00:05:53.030995 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 6 00:05:53.031273 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:53.046417 sudo[1708]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:53.048654 sshd[1701]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:53.053026 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:37596.service: Deactivated successfully.
Sep 6 00:05:53.054895 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 00:05:53.058031 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Sep 6 00:05:53.067893 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:37612.service - OpenSSH per-connection server daemon (10.0.0.1:37612).
Sep 6 00:05:53.069405 systemd-logind[1525]: Removed session 5.
Sep 6 00:05:53.108681 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 37612 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:53.110007 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:53.120131 systemd-logind[1525]: New session 6 of user core.
Sep 6 00:05:53.132903 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 6 00:05:53.185490 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 6 00:05:53.185854 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:53.190863 sudo[1718]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:53.195422 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 6 00:05:53.195771 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:53.226897 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 6 00:05:53.229385 auditctl[1721]: No rules
Sep 6 00:05:53.230181 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 6 00:05:53.230409 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 6 00:05:53.238121 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 6 00:05:53.294451 augenrules[1740]: No rules
Sep 6 00:05:53.299484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 6 00:05:53.301103 sudo[1717]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:53.304513 sshd[1713]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:53.316898 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624).
Sep 6 00:05:53.317294 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:37612.service: Deactivated successfully.
Sep 6 00:05:53.324745 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Sep 6 00:05:53.324985 systemd[1]: session-6.scope: Deactivated successfully.
Sep 6 00:05:53.328767 systemd-logind[1525]: Removed session 6.
Sep 6 00:05:53.364286 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:53.364965 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:53.369751 systemd-logind[1525]: New session 7 of user core.
Sep 6 00:05:53.381889 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 6 00:05:53.436387 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 00:05:53.436697 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:53.726040 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 6 00:05:53.726114 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 6 00:05:53.972651 dockerd[1771]: time="2025-09-06T00:05:53.972160152Z" level=info msg="Starting up"
Sep 6 00:05:54.035926 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport406285354-merged.mount: Deactivated successfully.
Sep 6 00:05:54.225723 dockerd[1771]: time="2025-09-06T00:05:54.225669197Z" level=info msg="Loading containers: start."
Sep 6 00:05:54.347630 kernel: Initializing XFRM netlink socket
Sep 6 00:05:54.413232 systemd-networkd[1230]: docker0: Link UP
Sep 6 00:05:54.440114 dockerd[1771]: time="2025-09-06T00:05:54.440060338Z" level=info msg="Loading containers: done."
Sep 6 00:05:54.452206 dockerd[1771]: time="2025-09-06T00:05:54.452146002Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 00:05:54.452325 dockerd[1771]: time="2025-09-06T00:05:54.452250121Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 6 00:05:54.452663 dockerd[1771]: time="2025-09-06T00:05:54.452639438Z" level=info msg="Daemon has completed initialization"
Sep 6 00:05:54.486635 dockerd[1771]: time="2025-09-06T00:05:54.486348669Z" level=info msg="API listen on /run/docker.sock"
Sep 6 00:05:54.486641 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 6 00:05:55.032965 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3405635192-merged.mount: Deactivated successfully.
Sep 6 00:05:55.090791 containerd[1546]: time="2025-09-06T00:05:55.090744954Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 6 00:05:55.737002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307488139.mount: Deactivated successfully.
Sep 6 00:05:56.977588 containerd[1546]: time="2025-09-06T00:05:56.977002176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:56.977947 containerd[1546]: time="2025-09-06T00:05:56.977655471Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 6 00:05:56.978768 containerd[1546]: time="2025-09-06T00:05:56.978740273Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:56.982322 containerd[1546]: time="2025-09-06T00:05:56.982279165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:56.983452 containerd[1546]: time="2025-09-06T00:05:56.983412529Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.892624095s"
Sep 6 00:05:56.983494 containerd[1546]: time="2025-09-06T00:05:56.983453112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 6 00:05:56.987231 containerd[1546]: time="2025-09-06T00:05:56.987098767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 6 00:05:58.111921 containerd[1546]: time="2025-09-06T00:05:58.111861695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:58.113514 containerd[1546]: time="2025-09-06T00:05:58.113472057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 6 00:05:58.114761 containerd[1546]: time="2025-09-06T00:05:58.114739234Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:58.117456 containerd[1546]: time="2025-09-06T00:05:58.117431087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:58.119452 containerd[1546]: time="2025-09-06T00:05:58.119424736Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.131838635s"
Sep 6 00:05:58.119506 containerd[1546]: time="2025-09-06T00:05:58.119458457Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 6 00:05:58.119912 containerd[1546]: time="2025-09-06T00:05:58.119885157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 6 00:05:58.723111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:05:58.732775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:58.850793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:58.853924 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:05:58.981618 kubelet[1986]: E0906 00:05:58.981473 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:05:58.986863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:05:58.987018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:05:59.277247 containerd[1546]: time="2025-09-06T00:05:59.277121612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:59.278070 containerd[1546]: time="2025-09-06T00:05:59.278028854Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 6 00:05:59.278932 containerd[1546]: time="2025-09-06T00:05:59.278905643Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:59.283582 containerd[1546]: time="2025-09-06T00:05:59.283066444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:59.285104 containerd[1546]: time="2025-09-06T00:05:59.285070590Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.16515249s"
Sep 6 00:05:59.285222 containerd[1546]: time="2025-09-06T00:05:59.285206805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 6 00:05:59.285894 containerd[1546]: time="2025-09-06T00:05:59.285871627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:06:00.313058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341989126.mount: Deactivated successfully.
Sep 6 00:06:00.817270 containerd[1546]: time="2025-09-06T00:06:00.816796851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:00.817763 containerd[1546]: time="2025-09-06T00:06:00.817736552Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 6 00:06:00.819332 containerd[1546]: time="2025-09-06T00:06:00.819300329Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:00.821303 containerd[1546]: time="2025-09-06T00:06:00.821279257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:00.822219 containerd[1546]: time="2025-09-06T00:06:00.821870897Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.535968827s"
Sep 6 00:06:00.822219 containerd[1546]: time="2025-09-06T00:06:00.821905026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 6 00:06:00.822391 containerd[1546]: time="2025-09-06T00:06:00.822339962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 00:06:01.428418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113044755.mount: Deactivated successfully.
Sep 6 00:06:02.170052 containerd[1546]: time="2025-09-06T00:06:02.170004348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.171182 containerd[1546]: time="2025-09-06T00:06:02.171145296Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 6 00:06:02.173547 containerd[1546]: time="2025-09-06T00:06:02.172442238Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.175760 containerd[1546]: time="2025-09-06T00:06:02.175723974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.178652 containerd[1546]: time="2025-09-06T00:06:02.178617537Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.356219676s"
Sep 6 00:06:02.178766 containerd[1546]: time="2025-09-06T00:06:02.178750481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 6 00:06:02.179343 containerd[1546]: time="2025-09-06T00:06:02.179296378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:06:02.671105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327926130.mount: Deactivated successfully.
Sep 6 00:06:02.679939 containerd[1546]: time="2025-09-06T00:06:02.679141515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.681568 containerd[1546]: time="2025-09-06T00:06:02.681541112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 6 00:06:02.683458 containerd[1546]: time="2025-09-06T00:06:02.683432583Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.685735 containerd[1546]: time="2025-09-06T00:06:02.685709010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:02.686495 containerd[1546]: time="2025-09-06T00:06:02.686467376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 506.951851ms"
Sep 6 00:06:02.686569 containerd[1546]: time="2025-09-06T00:06:02.686497867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 00:06:02.689124 containerd[1546]: time="2025-09-06T00:06:02.689085315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:06:03.224763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84954827.mount: Deactivated successfully.
Sep 6 00:06:04.971620 containerd[1546]: time="2025-09-06T00:06:04.971561912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:04.972491 containerd[1546]: time="2025-09-06T00:06:04.972456231Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 6 00:06:04.973950 containerd[1546]: time="2025-09-06T00:06:04.973918548Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:04.976901 containerd[1546]: time="2025-09-06T00:06:04.976870902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:04.979626 containerd[1546]: time="2025-09-06T00:06:04.978584980Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.289465888s"
Sep 6 00:06:04.979626 containerd[1546]: time="2025-09-06T00:06:04.978635959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 6 00:06:09.237324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:06:09.247791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:06:09.384799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:06:09.389036 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:06:09.422666 kubelet[2155]: E0906 00:06:09.422590 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:06:09.425135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:06:09.425319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:06:11.023010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:06:11.032806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:06:11.054446 systemd[1]: Reloading requested from client PID 2172 ('systemctl') (unit session-7.scope)...
Sep 6 00:06:11.054460 systemd[1]: Reloading...
Sep 6 00:06:11.125635 zram_generator::config[2218]: No configuration found.
Sep 6 00:06:11.243367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:06:11.295618 systemd[1]: Reloading finished in 240 ms.
Sep 6 00:06:11.339437 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 00:06:11.339537 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 00:06:11.340042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:06:11.341819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:06:11.450006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:06:11.454460 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 6 00:06:11.486606 kubelet[2269]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:06:11.486606 kubelet[2269]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:06:11.486606 kubelet[2269]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:06:11.486995 kubelet[2269]: I0906 00:06:11.486656 2269 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:06:12.929743 kubelet[2269]: I0906 00:06:12.929690 2269 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 6 00:06:12.929743 kubelet[2269]: I0906 00:06:12.929726 2269 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:06:12.930150 kubelet[2269]: I0906 00:06:12.929972 2269 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 6 00:06:12.953419 kubelet[2269]: E0906 00:06:12.953378 2269 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:06:12.953771 kubelet[2269]: I0906 00:06:12.953743 2269 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:06:12.962999 kubelet[2269]: E0906 00:06:12.962956 2269 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:06:12.963255 kubelet[2269]: I0906 00:06:12.963196 2269 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:06:12.967103 kubelet[2269]: I0906 00:06:12.967069 2269 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:06:12.968100 kubelet[2269]: I0906 00:06:12.968067 2269 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 6 00:06:12.968266 kubelet[2269]: I0906 00:06:12.968226 2269 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:06:12.968446 kubelet[2269]: I0906 00:06:12.968267 2269 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 6 00:06:12.968678 kubelet[2269]: I0906 00:06:12.968666 2269 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:06:12.968717 kubelet[2269]: I0906 00:06:12.968680 2269 container_manager_linux.go:300] "Creating device plugin manager"
Sep 6 00:06:12.969148 kubelet[2269]: I0906 00:06:12.969129 2269 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:06:12.971592 kubelet[2269]: I0906 00:06:12.971318 2269 kubelet.go:408] "Attempting to sync node with API server"
Sep 6 00:06:12.971592 kubelet[2269]: I0906 00:06:12.971346 2269 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:06:12.971592 kubelet[2269]: I0906 00:06:12.971376 2269 kubelet.go:314] "Adding apiserver pod source"
Sep 6 00:06:12.971592 kubelet[2269]: I0906 00:06:12.971513 2269 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:06:12.975397 kubelet[2269]: I0906 00:06:12.975374 2269 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 6 00:06:12.976293 kubelet[2269]: I0906 00:06:12.976136 2269 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 00:06:12.976293 kubelet[2269]: W0906 00:06:12.976171 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
Sep 6 00:06:12.976293 kubelet[2269]: E0906 00:06:12.976230 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:06:12.976416 kubelet[2269]: W0906 00:06:12.976240 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
Sep 6 00:06:12.976416 kubelet[2269]: W0906 00:06:12.976366 2269 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:06:12.976416 kubelet[2269]: E0906 00:06:12.976359 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:06:12.977380 kubelet[2269]: I0906 00:06:12.977359 2269 server.go:1274] "Started kubelet"
Sep 6 00:06:12.978736 kubelet[2269]: I0906 00:06:12.978691 2269 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:06:12.979196 kubelet[2269]: I0906 00:06:12.979159 2269 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:06:12.979348 kubelet[2269]: I0906 00:06:12.979317 2269 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:06:12.979757 kubelet[2269]: I0906 00:06:12.979622 2269 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:06:12.980884 kubelet[2269]: I0906 00:06:12.980854 2269 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:06:12.981088 kubelet[2269]: I0906 00:06:12.981071 2269 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:06:12.981956 kubelet[2269]: E0906 00:06:12.980868 2269 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186288c25c0d7521 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:06:12.977333537 +0000 UTC m=+1.519936414,LastTimestamp:2025-09-06 00:06:12.977333537 +0000 UTC m=+1.519936414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 6 00:06:12.982305 kubelet[2269]: I0906 00:06:12.982279 2269 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:06:12.982400 kubelet[2269]: I0906 00:06:12.982384 2269 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:06:12.982463 kubelet[2269]: I0906 00:06:12.982451 2269 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:06:12.982887 kubelet[2269]: E0906 00:06:12.982787 2269 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms"
Sep 6 00:06:12.982887 kubelet[2269]: E0906 00:06:12.982834 2269 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:06:12.982887 kubelet[2269]: W0906 00:06:12.982834 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
Sep 6 00:06:12.982887 kubelet[2269]: E0906 00:06:12.982882 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:06:12.983157 kubelet[2269]: E0906 00:06:12.983141 2269 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:06:12.983255 kubelet[2269]: I0906 00:06:12.983203 2269 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:06:12.985014 kubelet[2269]: I0906 00:06:12.984774 2269 factory.go:221] Registration of the containerd container factory successfully
Sep 6 00:06:12.985014 kubelet[2269]: I0906 00:06:12.984791 2269 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:06:12.995238 kubelet[2269]: I0906 00:06:12.995184 2269 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:06:12.996550 kubelet[2269]: I0906 00:06:12.996529 2269 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:06:12.996758 kubelet[2269]: I0906 00:06:12.996737 2269 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 6 00:06:12.997270 kubelet[2269]: I0906 00:06:12.996945 2269 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 6 00:06:12.997270 kubelet[2269]: E0906 00:06:12.996999 2269 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:06:12.998314 kubelet[2269]: W0906 00:06:12.998116 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
Sep 6 00:06:12.998402 kubelet[2269]: E0906 00:06:12.998328 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:06:13.008637 kubelet[2269]: I0906 00:06:13.008618 2269 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 6 00:06:13.008750 kubelet[2269]: I0906 00:06:13.008738 2269 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 6 00:06:13.008830 kubelet[2269]: I0906 00:06:13.008822 2269 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:06:13.079890 kubelet[2269]: I0906 00:06:13.079859 2269 policy_none.go:49] "None policy: Start"
Sep 6 00:06:13.080795 kubelet[2269]: I0906 00:06:13.080775 2269 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 6 00:06:13.080884 kubelet[2269]: I0906 00:06:13.080874 2269 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:06:13.083395 kubelet[2269]: E0906 00:06:13.083369 2269 kubelet_node_status.go:453] "Error
getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:06:13.086963 kubelet[2269]: I0906 00:06:13.085979 2269 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:06:13.086963 kubelet[2269]: I0906 00:06:13.086192 2269 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:06:13.086963 kubelet[2269]: I0906 00:06:13.086202 2269 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:06:13.087105 kubelet[2269]: I0906 00:06:13.087027 2269 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:06:13.088207 kubelet[2269]: E0906 00:06:13.088185 2269 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:06:13.184201 kubelet[2269]: E0906 00:06:13.183179 2269 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Sep 6 00:06:13.188826 kubelet[2269]: I0906 00:06:13.188356 2269 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:06:13.188826 kubelet[2269]: E0906 00:06:13.188792 2269 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Sep 6 00:06:13.283459 kubelet[2269]: I0906 00:06:13.283413 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:06:13.283459 
kubelet[2269]: I0906 00:06:13.283457 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:13.283619 kubelet[2269]: I0906 00:06:13.283476 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:13.283619 kubelet[2269]: I0906 00:06:13.283495 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:13.283619 kubelet[2269]: I0906 00:06:13.283511 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:13.283619 kubelet[2269]: I0906 00:06:13.283527 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:13.283619 kubelet[2269]: I0906 00:06:13.283541 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:13.283746 kubelet[2269]: I0906 00:06:13.283555 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:13.283746 kubelet[2269]: I0906 00:06:13.283580 2269 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:13.390416 kubelet[2269]: I0906 00:06:13.390391 2269 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:06:13.390763 kubelet[2269]: E0906 00:06:13.390724 2269 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Sep 6 00:06:13.403976 kubelet[2269]: E0906 00:06:13.403953 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:13.403976 kubelet[2269]: E0906 00:06:13.403954 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:13.404903 containerd[1546]: time="2025-09-06T00:06:13.404570807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:13.404903 containerd[1546]: time="2025-09-06T00:06:13.404638094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:13.405291 kubelet[2269]: E0906 00:06:13.405013 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:13.405373 containerd[1546]: time="2025-09-06T00:06:13.405308882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0152139a0d34f90fbff87801df96bd91,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:13.585029 kubelet[2269]: E0906 00:06:13.584919 2269 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Sep 6 00:06:13.792786 kubelet[2269]: I0906 00:06:13.792750 2269 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:06:13.793075 kubelet[2269]: E0906 00:06:13.793038 2269 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Sep 6 00:06:13.837614 kubelet[2269]: W0906 00:06:13.837483 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Sep 6 00:06:13.837738 kubelet[2269]: E0906 00:06:13.837527 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:06:13.998763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863440544.mount: Deactivated successfully. Sep 6 00:06:14.005526 containerd[1546]: time="2025-09-06T00:06:14.005431896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:06:14.006191 containerd[1546]: time="2025-09-06T00:06:14.005974431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:06:14.007006 containerd[1546]: time="2025-09-06T00:06:14.006981812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:06:14.007861 containerd[1546]: time="2025-09-06T00:06:14.007834017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:06:14.009107 containerd[1546]: time="2025-09-06T00:06:14.009078587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 6 00:06:14.009873 containerd[1546]: time="2025-09-06T00:06:14.009842853Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:06:14.011447 containerd[1546]: time="2025-09-06T00:06:14.011360733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:06:14.014625 containerd[1546]: time="2025-09-06T00:06:14.012646910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:06:14.015029 containerd[1546]: time="2025-09-06T00:06:14.014986721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.319951ms" Sep 6 00:06:14.016556 containerd[1546]: time="2025-09-06T00:06:14.016410934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 611.046381ms" Sep 6 00:06:14.018790 containerd[1546]: time="2025-09-06T00:06:14.018760756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.063747ms" Sep 6 00:06:14.086702 kubelet[2269]: W0906 00:06:14.086529 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Sep 6 00:06:14.086702 kubelet[2269]: E0906 00:06:14.086628 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:06:14.113348 containerd[1546]: time="2025-09-06T00:06:14.113009930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.113220609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.113263738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.113274710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.114229832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.114481357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.114499578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.114638 containerd[1546]: time="2025-09-06T00:06:14.114568376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.114815 containerd[1546]: time="2025-09-06T00:06:14.112931522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:14.114815 containerd[1546]: time="2025-09-06T00:06:14.114274323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:14.114815 containerd[1546]: time="2025-09-06T00:06:14.114287498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.114815 containerd[1546]: time="2025-09-06T00:06:14.114372154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:14.160438 kubelet[2269]: W0906 00:06:14.160357 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Sep 6 00:06:14.160532 kubelet[2269]: E0906 00:06:14.160443 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:06:14.166229 containerd[1546]: time="2025-09-06T00:06:14.166144766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a9647690cc12b58adf03e2704945a8885a331613d9be3835c5d2d45c5a854c6\"" Sep 6 00:06:14.166931 containerd[1546]: time="2025-09-06T00:06:14.166907350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0152139a0d34f90fbff87801df96bd91,Namespace:kube-system,Attempt:0,} returns sandbox id \"f69781fc9f716f9a008fb6f3f269aef661cfb8a16da4bd4aae9ddb9681281d27\"" Sep 6 00:06:14.168832 kubelet[2269]: E0906 00:06:14.168803 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:14.168905 kubelet[2269]: E0906 00:06:14.168835 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:14.170126 containerd[1546]: time="2025-09-06T00:06:14.169859375Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6ef59c70ad80ce64e478e3d0632a5f9dfe6bbb72577563a4a0f638e568973c8\"" Sep 6 00:06:14.170571 kubelet[2269]: E0906 00:06:14.170539 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:14.171235 containerd[1546]: time="2025-09-06T00:06:14.171186959Z" level=info msg="CreateContainer within sandbox \"7a9647690cc12b58adf03e2704945a8885a331613d9be3835c5d2d45c5a854c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:06:14.171326 containerd[1546]: time="2025-09-06T00:06:14.171302249Z" level=info msg="CreateContainer within sandbox \"f69781fc9f716f9a008fb6f3f269aef661cfb8a16da4bd4aae9ddb9681281d27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:06:14.172805 containerd[1546]: time="2025-09-06T00:06:14.172765667Z" level=info msg="CreateContainer within sandbox \"c6ef59c70ad80ce64e478e3d0632a5f9dfe6bbb72577563a4a0f638e568973c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:06:14.185575 containerd[1546]: time="2025-09-06T00:06:14.185535534Z" level=info msg="CreateContainer within sandbox \"7a9647690cc12b58adf03e2704945a8885a331613d9be3835c5d2d45c5a854c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f7591447cc1e3f73605817c8e8ddeb5e08b47b7a6565f25ac56a44f28f3449d\"" Sep 6 00:06:14.186054 containerd[1546]: time="2025-09-06T00:06:14.186028292Z" level=info msg="StartContainer for \"3f7591447cc1e3f73605817c8e8ddeb5e08b47b7a6565f25ac56a44f28f3449d\"" Sep 6 00:06:14.189361 containerd[1546]: time="2025-09-06T00:06:14.189215543Z" level=info msg="CreateContainer within sandbox \"f69781fc9f716f9a008fb6f3f269aef661cfb8a16da4bd4aae9ddb9681281d27\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a797e2ca553fdd4530d9f940f7b809d453298361ad3c0e76ec4a0a5516a5e6a\"" Sep 6 00:06:14.189696 containerd[1546]: time="2025-09-06T00:06:14.189669818Z" level=info msg="StartContainer for \"6a797e2ca553fdd4530d9f940f7b809d453298361ad3c0e76ec4a0a5516a5e6a\"" Sep 6 00:06:14.193766 containerd[1546]: time="2025-09-06T00:06:14.193395599Z" level=info msg="CreateContainer within sandbox \"c6ef59c70ad80ce64e478e3d0632a5f9dfe6bbb72577563a4a0f638e568973c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d84ee2c6fdbcea3b43db3427c3d81d9d616da27afe2372aeab050ccc2606fcc\"" Sep 6 00:06:14.193996 containerd[1546]: time="2025-09-06T00:06:14.193973253Z" level=info msg="StartContainer for \"3d84ee2c6fdbcea3b43db3427c3d81d9d616da27afe2372aeab050ccc2606fcc\"" Sep 6 00:06:14.229642 kubelet[2269]: W0906 00:06:14.229369 2269 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Sep 6 00:06:14.229854 kubelet[2269]: E0906 00:06:14.229831 2269 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:06:14.249824 containerd[1546]: time="2025-09-06T00:06:14.248840211Z" level=info msg="StartContainer for \"6a797e2ca553fdd4530d9f940f7b809d453298361ad3c0e76ec4a0a5516a5e6a\" returns successfully" Sep 6 00:06:14.252128 containerd[1546]: time="2025-09-06T00:06:14.251964150Z" level=info msg="StartContainer for \"3f7591447cc1e3f73605817c8e8ddeb5e08b47b7a6565f25ac56a44f28f3449d\" returns successfully" Sep 6 00:06:14.279500 containerd[1546]: 
time="2025-09-06T00:06:14.279364912Z" level=info msg="StartContainer for \"3d84ee2c6fdbcea3b43db3427c3d81d9d616da27afe2372aeab050ccc2606fcc\" returns successfully" Sep 6 00:06:14.594978 kubelet[2269]: I0906 00:06:14.594727 2269 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:06:15.009454 kubelet[2269]: E0906 00:06:15.009239 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:15.015679 kubelet[2269]: E0906 00:06:15.013333 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:15.017063 kubelet[2269]: E0906 00:06:15.017042 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:16.018626 kubelet[2269]: E0906 00:06:16.018106 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:16.381561 kubelet[2269]: E0906 00:06:16.381460 2269 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:06:16.433841 kubelet[2269]: I0906 00:06:16.433668 2269 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:06:16.433841 kubelet[2269]: E0906 00:06:16.433705 2269 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:06:16.447265 kubelet[2269]: E0906 00:06:16.446981 2269 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:06:16.472351 kubelet[2269]: 
E0906 00:06:16.472246 2269 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186288c25c0d7521 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:06:12.977333537 +0000 UTC m=+1.519936414,LastTimestamp:2025-09-06 00:06:12.977333537 +0000 UTC m=+1.519936414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:06:16.926006 kubelet[2269]: E0906 00:06:16.925973 2269 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 6 00:06:16.926153 kubelet[2269]: E0906 00:06:16.926140 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:16.978244 kubelet[2269]: I0906 00:06:16.978207 2269 apiserver.go:52] "Watching apiserver" Sep 6 00:06:16.982707 kubelet[2269]: I0906 00:06:16.982679 2269 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:06:18.392244 systemd[1]: Reloading requested from client PID 2545 ('systemctl') (unit session-7.scope)... Sep 6 00:06:18.392523 systemd[1]: Reloading... Sep 6 00:06:18.453718 zram_generator::config[2584]: No configuration found. Sep 6 00:06:18.539113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 6 00:06:18.596438 systemd[1]: Reloading finished in 203 ms. Sep 6 00:06:18.621980 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:06:18.640919 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:06:18.641183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:06:18.652941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:06:18.744906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:06:18.749552 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:06:18.784769 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:06:18.785669 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:06:18.785669 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:06:18.785669 kubelet[2636]: I0906 00:06:18.785171 2636 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:06:18.794653 kubelet[2636]: I0906 00:06:18.794626 2636 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:06:18.794653 kubelet[2636]: I0906 00:06:18.794651 2636 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:06:18.794890 kubelet[2636]: I0906 00:06:18.794872 2636 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:06:18.796945 kubelet[2636]: I0906 00:06:18.796165 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:06:18.799278 kubelet[2636]: I0906 00:06:18.799254 2636 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:06:18.802353 kubelet[2636]: E0906 00:06:18.802314 2636 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:06:18.802353 kubelet[2636]: I0906 00:06:18.802342 2636 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:06:18.804502 kubelet[2636]: I0906 00:06:18.804483 2636 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:06:18.804932 kubelet[2636]: I0906 00:06:18.804910 2636 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:06:18.805032 kubelet[2636]: I0906 00:06:18.805007 2636 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:06:18.805173 kubelet[2636]: I0906 00:06:18.805032 2636 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 6 00:06:18.805252 kubelet[2636]: I0906 00:06:18.805180 2636 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:06:18.805252 kubelet[2636]: I0906 00:06:18.805190 2636 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:06:18.805252 kubelet[2636]: I0906 00:06:18.805230 2636 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:06:18.805329 kubelet[2636]: I0906 00:06:18.805318 2636 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:06:18.805355 kubelet[2636]: I0906 00:06:18.805330 2636 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:06:18.805355 kubelet[2636]: I0906 00:06:18.805347 2636 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:06:18.805403 kubelet[2636]: I0906 00:06:18.805358 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:06:18.805994 kubelet[2636]: I0906 00:06:18.805902 2636 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 6 00:06:18.807263 kubelet[2636]: I0906 00:06:18.807217 2636 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:06:18.807840 kubelet[2636]: I0906 00:06:18.807814 2636 server.go:1274] "Started kubelet" Sep 6 00:06:18.807969 kubelet[2636]: I0906 00:06:18.807943 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:06:18.809038 kubelet[2636]: I0906 00:06:18.809013 2636 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:06:18.809206 kubelet[2636]: I0906 00:06:18.809152 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:06:18.809882 kubelet[2636]: I0906 00:06:18.809862 2636 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:06:18.810433 kubelet[2636]: 
I0906 00:06:18.810384 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:06:18.810641 kubelet[2636]: I0906 00:06:18.810589 2636 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:06:18.812102 kubelet[2636]: I0906 00:06:18.812084 2636 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:06:18.812282 kubelet[2636]: I0906 00:06:18.812269 2636 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:06:18.812524 kubelet[2636]: I0906 00:06:18.812512 2636 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:06:18.817308 kubelet[2636]: I0906 00:06:18.812963 2636 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:06:18.817308 kubelet[2636]: I0906 00:06:18.813179 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:06:18.817383 kubelet[2636]: E0906 00:06:18.817364 2636 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:06:18.819186 kubelet[2636]: E0906 00:06:18.819163 2636 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:06:18.820142 kubelet[2636]: I0906 00:06:18.819783 2636 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:06:18.831809 kubelet[2636]: I0906 00:06:18.831738 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:06:18.832592 kubelet[2636]: I0906 00:06:18.832572 2636 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:06:18.832628 kubelet[2636]: I0906 00:06:18.832593 2636 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:06:18.832650 kubelet[2636]: I0906 00:06:18.832636 2636 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:06:18.833015 kubelet[2636]: E0906 00:06:18.832673 2636 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:06:18.872936 kubelet[2636]: I0906 00:06:18.872912 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:06:18.872936 kubelet[2636]: I0906 00:06:18.872929 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:06:18.872936 kubelet[2636]: I0906 00:06:18.872947 2636 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:06:18.873107 kubelet[2636]: I0906 00:06:18.873090 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:06:18.873133 kubelet[2636]: I0906 00:06:18.873106 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:06:18.873133 kubelet[2636]: I0906 00:06:18.873124 2636 policy_none.go:49] "None policy: Start" Sep 6 00:06:18.873686 kubelet[2636]: I0906 00:06:18.873670 2636 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:06:18.873745 kubelet[2636]: I0906 00:06:18.873692 2636 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:06:18.873827 kubelet[2636]: I0906 00:06:18.873797 2636 state_mem.go:75] "Updated machine memory state" Sep 6 00:06:18.875083 kubelet[2636]: I0906 00:06:18.875066 2636 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:06:18.876076 kubelet[2636]: I0906 00:06:18.875229 2636 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:06:18.876076 kubelet[2636]: I0906 00:06:18.875247 2636 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:06:18.876149 kubelet[2636]: I0906 00:06:18.876099 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:06:18.978755 kubelet[2636]: I0906 00:06:18.978660 2636 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:06:18.987016 kubelet[2636]: I0906 00:06:18.986979 2636 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 6 00:06:18.987116 kubelet[2636]: I0906 00:06:18.987063 2636 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:06:19.013428 kubelet[2636]: I0906 00:06:19.013318 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:06:19.013428 kubelet[2636]: I0906 00:06:19.013353 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:19.013428 kubelet[2636]: I0906 00:06:19.013371 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:19.013428 kubelet[2636]: I0906 00:06:19.013385 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:19.013428 kubelet[2636]: I0906 00:06:19.013403 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:19.013713 kubelet[2636]: I0906 00:06:19.013439 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:19.013713 kubelet[2636]: I0906 00:06:19.013484 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:19.013713 kubelet[2636]: I0906 00:06:19.013509 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0152139a0d34f90fbff87801df96bd91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0152139a0d34f90fbff87801df96bd91\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:06:19.013713 kubelet[2636]: I0906 00:06:19.013525 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:06:19.239042 kubelet[2636]: E0906 00:06:19.238915 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.239042 kubelet[2636]: E0906 00:06:19.238983 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.239881 kubelet[2636]: E0906 00:06:19.239830 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.424127 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:06:19.424407 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 6 00:06:19.807008 kubelet[2636]: I0906 00:06:19.805818 2636 apiserver.go:52] "Watching apiserver" Sep 6 00:06:19.813289 kubelet[2636]: I0906 00:06:19.813272 2636 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:06:19.857680 kubelet[2636]: E0906 00:06:19.852731 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.857680 kubelet[2636]: E0906 00:06:19.853905 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.857680 kubelet[2636]: E0906 00:06:19.854150 2636 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:19.868813 sudo[2673]: pam_unix(sudo:session): session closed for user root Sep 6 00:06:19.874191 kubelet[2636]: I0906 00:06:19.874128 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.874114321 podStartE2EDuration="1.874114321s" podCreationTimestamp="2025-09-06 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:19.865234048 +0000 UTC m=+1.112675393" watchObservedRunningTime="2025-09-06 00:06:19.874114321 +0000 UTC m=+1.121555666" Sep 6 00:06:19.881478 kubelet[2636]: I0906 00:06:19.881413 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.880797922 podStartE2EDuration="1.880797922s" podCreationTimestamp="2025-09-06 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:19.874384823 +0000 UTC m=+1.121826168" watchObservedRunningTime="2025-09-06 00:06:19.880797922 +0000 UTC m=+1.128239267" Sep 6 00:06:19.881619 kubelet[2636]: I0906 00:06:19.881573 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.881565437 podStartE2EDuration="1.881565437s" podCreationTimestamp="2025-09-06 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:19.880652145 +0000 UTC m=+1.128093490" watchObservedRunningTime="2025-09-06 00:06:19.881565437 +0000 UTC m=+1.129006782" Sep 6 00:06:20.855835 kubelet[2636]: E0906 00:06:20.855413 2636 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:20.855835 kubelet[2636]: E0906 00:06:20.855520 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:21.529276 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 6 00:06:21.530672 sshd[1746]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:21.534758 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:37624.service: Deactivated successfully. Sep 6 00:06:21.536588 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:06:21.538069 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:06:21.539181 systemd-logind[1525]: Removed session 7. Sep 6 00:06:23.496281 kubelet[2636]: I0906 00:06:23.496238 2636 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:06:23.497169 containerd[1546]: time="2025-09-06T00:06:23.496984516Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:06:23.497415 kubelet[2636]: I0906 00:06:23.497207 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:06:23.957648 kubelet[2636]: I0906 00:06:23.953156 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e06200e-e168-456d-b196-de01c0f1face-lib-modules\") pod \"kube-proxy-qjztg\" (UID: \"8e06200e-e168-456d-b196-de01c0f1face\") " pod="kube-system/kube-proxy-qjztg" Sep 6 00:06:23.957937 kubelet[2636]: I0906 00:06:23.957834 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e06200e-e168-456d-b196-de01c0f1face-kube-proxy\") pod \"kube-proxy-qjztg\" (UID: \"8e06200e-e168-456d-b196-de01c0f1face\") " pod="kube-system/kube-proxy-qjztg" Sep 6 00:06:23.957937 kubelet[2636]: I0906 00:06:23.957862 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e06200e-e168-456d-b196-de01c0f1face-xtables-lock\") pod \"kube-proxy-qjztg\" (UID: \"8e06200e-e168-456d-b196-de01c0f1face\") " pod="kube-system/kube-proxy-qjztg" Sep 6 00:06:23.957937 kubelet[2636]: I0906 00:06:23.957879 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmvr\" (UniqueName: \"kubernetes.io/projected/8e06200e-e168-456d-b196-de01c0f1face-kube-api-access-hlmvr\") pod \"kube-proxy-qjztg\" (UID: \"8e06200e-e168-456d-b196-de01c0f1face\") " pod="kube-system/kube-proxy-qjztg" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058358 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-xtables-lock\") pod \"cilium-mhxr2\" (UID: 
\"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058401 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-kernel\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058548 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-run\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058568 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-clustermesh-secrets\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058593 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cni-path\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.058869 kubelet[2636]: I0906 00:06:24.058633 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-lib-modules\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058647 
2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-config-path\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058660 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hubble-tls\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058675 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-etc-cni-netd\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058690 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-net\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058704 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtm7\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059131 kubelet[2636]: I0906 00:06:24.058719 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-bpf-maps\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059256 kubelet[2636]: I0906 00:06:24.058736 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hostproc\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.059256 kubelet[2636]: I0906 00:06:24.058793 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-cgroup\") pod \"cilium-mhxr2\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") " pod="kube-system/cilium-mhxr2" Sep 6 00:06:24.073825 kubelet[2636]: E0906 00:06:24.073654 2636 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:06:24.073825 kubelet[2636]: E0906 00:06:24.073687 2636 projected.go:194] Error preparing data for projected volume kube-api-access-hlmvr for pod kube-system/kube-proxy-qjztg: configmap "kube-root-ca.crt" not found Sep 6 00:06:24.073825 kubelet[2636]: E0906 00:06:24.073751 2636 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e06200e-e168-456d-b196-de01c0f1face-kube-api-access-hlmvr podName:8e06200e-e168-456d-b196-de01c0f1face nodeName:}" failed. No retries permitted until 2025-09-06 00:06:24.573731807 +0000 UTC m=+5.821173112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hlmvr" (UniqueName: "kubernetes.io/projected/8e06200e-e168-456d-b196-de01c0f1face-kube-api-access-hlmvr") pod "kube-proxy-qjztg" (UID: "8e06200e-e168-456d-b196-de01c0f1face") : configmap "kube-root-ca.crt" not found Sep 6 00:06:24.170576 kubelet[2636]: E0906 00:06:24.169289 2636 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:06:24.170576 kubelet[2636]: E0906 00:06:24.169323 2636 projected.go:194] Error preparing data for projected volume kube-api-access-zvtm7 for pod kube-system/cilium-mhxr2: configmap "kube-root-ca.crt" not found Sep 6 00:06:24.170576 kubelet[2636]: E0906 00:06:24.169367 2636 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7 podName:7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90 nodeName:}" failed. No retries permitted until 2025-09-06 00:06:24.669350487 +0000 UTC m=+5.916791832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zvtm7" (UniqueName: "kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7") pod "cilium-mhxr2" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90") : configmap "kube-root-ca.crt" not found Sep 6 00:06:24.552498 kubelet[2636]: E0906 00:06:24.552144 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.664364 kubelet[2636]: I0906 00:06:24.664294 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fh5b\" (UniqueName: \"kubernetes.io/projected/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-kube-api-access-5fh5b\") pod \"cilium-operator-5d85765b45-n9qxb\" (UID: \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\") " pod="kube-system/cilium-operator-5d85765b45-n9qxb" Sep 6 00:06:24.664364 kubelet[2636]: I0906 00:06:24.664341 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-cilium-config-path\") pod \"cilium-operator-5d85765b45-n9qxb\" (UID: \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\") " pod="kube-system/cilium-operator-5d85765b45-n9qxb" Sep 6 00:06:24.853431 kubelet[2636]: E0906 00:06:24.853394 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.854720 containerd[1546]: time="2025-09-06T00:06:24.854242485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjztg,Uid:8e06200e-e168-456d-b196-de01c0f1face,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:24.859591 kubelet[2636]: E0906 00:06:24.859558 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.861321 containerd[1546]: time="2025-09-06T00:06:24.860038512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhxr2,Uid:7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:24.863923 kubelet[2636]: E0906 00:06:24.863765 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.888507 containerd[1546]: time="2025-09-06T00:06:24.888172347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:24.889029 containerd[1546]: time="2025-09-06T00:06:24.888471579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:24.889029 containerd[1546]: time="2025-09-06T00:06:24.888793384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:24.889029 containerd[1546]: time="2025-09-06T00:06:24.888909299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:24.909730 containerd[1546]: time="2025-09-06T00:06:24.909434307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:24.909730 containerd[1546]: time="2025-09-06T00:06:24.909490743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:24.911283 containerd[1546]: time="2025-09-06T00:06:24.911102094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:24.911283 containerd[1546]: time="2025-09-06T00:06:24.911233258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:24.926690 containerd[1546]: time="2025-09-06T00:06:24.926571949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjztg,Uid:8e06200e-e168-456d-b196-de01c0f1face,Namespace:kube-system,Attempt:0,} returns sandbox id \"00ce6e0ac249498b55364b4c6b34b9cbaabdbbf4f45c9658acbf021f45144870\"" Sep 6 00:06:24.927301 kubelet[2636]: E0906 00:06:24.927269 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.930329 containerd[1546]: time="2025-09-06T00:06:24.930252863Z" level=info msg="CreateContainer within sandbox \"00ce6e0ac249498b55364b4c6b34b9cbaabdbbf4f45c9658acbf021f45144870\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:06:24.937449 kubelet[2636]: E0906 00:06:24.937424 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.938145 containerd[1546]: time="2025-09-06T00:06:24.938111050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n9qxb,Uid:581f66a8-8247-40c2-9cd3-2b0cd2a35d91,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:24.952168 containerd[1546]: time="2025-09-06T00:06:24.952137061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhxr2,Uid:7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\"" Sep 6 00:06:24.953016 kubelet[2636]: E0906 00:06:24.952928 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:24.954230 containerd[1546]: time="2025-09-06T00:06:24.954186452Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:06:24.954566 containerd[1546]: time="2025-09-06T00:06:24.954534554Z" level=info msg="CreateContainer within sandbox \"00ce6e0ac249498b55364b4c6b34b9cbaabdbbf4f45c9658acbf021f45144870\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6943abb824de1e0cc2af35e8edc9cc96a96ce026165e6b931abdb155b1dcca64\"" Sep 6 00:06:24.955020 containerd[1546]: time="2025-09-06T00:06:24.954990846Z" level=info msg="StartContainer for \"6943abb824de1e0cc2af35e8edc9cc96a96ce026165e6b931abdb155b1dcca64\"" Sep 6 00:06:24.989407 containerd[1546]: time="2025-09-06T00:06:24.989205251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:24.989407 containerd[1546]: time="2025-09-06T00:06:24.989252201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:24.989407 containerd[1546]: time="2025-09-06T00:06:24.989262367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:24.989407 containerd[1546]: time="2025-09-06T00:06:24.989343459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:25.024776 containerd[1546]: time="2025-09-06T00:06:25.024720453Z" level=info msg="StartContainer for \"6943abb824de1e0cc2af35e8edc9cc96a96ce026165e6b931abdb155b1dcca64\" returns successfully" Sep 6 00:06:25.048306 containerd[1546]: time="2025-09-06T00:06:25.048261621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n9qxb,Uid:581f66a8-8247-40c2-9cd3-2b0cd2a35d91,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\"" Sep 6 00:06:25.048949 kubelet[2636]: E0906 00:06:25.048928 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:25.871620 kubelet[2636]: E0906 00:06:25.871541 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:06:25.885013 kubelet[2636]: I0906 00:06:25.884959 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjztg" podStartSLOduration=2.884943415 podStartE2EDuration="2.884943415s" podCreationTimestamp="2025-09-06 00:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:25.884889703 +0000 UTC m=+7.132331048" watchObservedRunningTime="2025-09-06 00:06:25.884943415 +0000 UTC m=+7.132384760" Sep 6 00:06:28.555475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131377719.mount: Deactivated successfully. 
Sep 6 00:06:29.232722 kubelet[2636]: E0906 00:06:29.232665 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:29.882651 kubelet[2636]: E0906 00:06:29.882401 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:29.930852 containerd[1546]: time="2025-09-06T00:06:29.930803053Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:29.932541 containerd[1546]: time="2025-09-06T00:06:29.932503362Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 6 00:06:29.933957 containerd[1546]: time="2025-09-06T00:06:29.933927817Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:29.937372 containerd[1546]: time="2025-09-06T00:06:29.937343644Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.98310484s"
Sep 6 00:06:29.937441 containerd[1546]: time="2025-09-06T00:06:29.937376980Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 6 00:06:29.941688 containerd[1546]: time="2025-09-06T00:06:29.941615848Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:06:29.943678 containerd[1546]: time="2025-09-06T00:06:29.943173848Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:06:29.977584 containerd[1546]: time="2025-09-06T00:06:29.977544456Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\""
Sep 6 00:06:29.978341 containerd[1546]: time="2025-09-06T00:06:29.978295582Z" level=info msg="StartContainer for \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\""
Sep 6 00:06:30.031706 containerd[1546]: time="2025-09-06T00:06:30.031663325Z" level=info msg="StartContainer for \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\" returns successfully"
Sep 6 00:06:30.065687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23-rootfs.mount: Deactivated successfully.
Sep 6 00:06:30.142689 containerd[1546]: time="2025-09-06T00:06:30.139248972Z" level=info msg="shim disconnected" id=ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23 namespace=k8s.io
Sep 6 00:06:30.142689 containerd[1546]: time="2025-09-06T00:06:30.142620733Z" level=warning msg="cleaning up after shim disconnected" id=ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23 namespace=k8s.io
Sep 6 00:06:30.142689 containerd[1546]: time="2025-09-06T00:06:30.142633298Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:30.725377 kubelet[2636]: E0906 00:06:30.724127 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:30.890711 kubelet[2636]: E0906 00:06:30.890492 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:30.897806 containerd[1546]: time="2025-09-06T00:06:30.897315960Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:06:30.932229 containerd[1546]: time="2025-09-06T00:06:30.932185983Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\""
Sep 6 00:06:30.932869 containerd[1546]: time="2025-09-06T00:06:30.932822077Z" level=info msg="StartContainer for \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\""
Sep 6 00:06:30.992105 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:06:30.992653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:06:30.992721 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:06:30.996017 containerd[1546]: time="2025-09-06T00:06:30.995941298Z" level=info msg="StartContainer for \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\" returns successfully"
Sep 6 00:06:30.999970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:06:31.015467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:06:31.020913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362-rootfs.mount: Deactivated successfully.
Sep 6 00:06:31.045630 containerd[1546]: time="2025-09-06T00:06:31.045566846Z" level=info msg="shim disconnected" id=2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362 namespace=k8s.io
Sep 6 00:06:31.045785 containerd[1546]: time="2025-09-06T00:06:31.045642519Z" level=warning msg="cleaning up after shim disconnected" id=2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362 namespace=k8s.io
Sep 6 00:06:31.045785 containerd[1546]: time="2025-09-06T00:06:31.045655605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:31.083653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493637377.mount: Deactivated successfully.
Sep 6 00:06:31.681432 containerd[1546]: time="2025-09-06T00:06:31.681216795Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:31.683376 containerd[1546]: time="2025-09-06T00:06:31.683273379Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 6 00:06:31.684283 containerd[1546]: time="2025-09-06T00:06:31.684244766Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:06:31.686644 containerd[1546]: time="2025-09-06T00:06:31.686539775Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.744891152s"
Sep 6 00:06:31.686702 containerd[1546]: time="2025-09-06T00:06:31.686641780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 6 00:06:31.689368 containerd[1546]: time="2025-09-06T00:06:31.689316716Z" level=info msg="CreateContainer within sandbox \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:06:31.702452 containerd[1546]: time="2025-09-06T00:06:31.702420236Z" level=info msg="CreateContainer within sandbox \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\""
Sep 6 00:06:31.703464 containerd[1546]: time="2025-09-06T00:06:31.702851746Z" level=info msg="StartContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\""
Sep 6 00:06:31.755885 containerd[1546]: time="2025-09-06T00:06:31.755840960Z" level=info msg="StartContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" returns successfully"
Sep 6 00:06:31.905013 kubelet[2636]: E0906 00:06:31.904984 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:31.907508 kubelet[2636]: E0906 00:06:31.907074 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:31.916801 containerd[1546]: time="2025-09-06T00:06:31.916750815Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:06:31.974964 containerd[1546]: time="2025-09-06T00:06:31.974832948Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\""
Sep 6 00:06:31.977579 containerd[1546]: time="2025-09-06T00:06:31.975357778Z" level=info msg="StartContainer for \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\""
Sep 6 00:06:32.034307 containerd[1546]: time="2025-09-06T00:06:32.034259894Z" level=info msg="StartContainer for \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\" returns successfully"
Sep 6 00:06:32.060682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa-rootfs.mount: Deactivated successfully.
Sep 6 00:06:32.078166 containerd[1546]: time="2025-09-06T00:06:32.078098806Z" level=info msg="shim disconnected" id=763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa namespace=k8s.io
Sep 6 00:06:32.078166 containerd[1546]: time="2025-09-06T00:06:32.078147906Z" level=warning msg="cleaning up after shim disconnected" id=763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa namespace=k8s.io
Sep 6 00:06:32.078166 containerd[1546]: time="2025-09-06T00:06:32.078156510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:32.140692 update_engine[1532]: I20250906 00:06:32.140626 1532 update_attempter.cc:509] Updating boot flags...
Sep 6 00:06:32.168634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3279)
Sep 6 00:06:32.207637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3280)
Sep 6 00:06:32.248700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3280)
Sep 6 00:06:32.906927 kubelet[2636]: E0906 00:06:32.906882 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:32.907589 kubelet[2636]: E0906 00:06:32.907546 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:32.910611 containerd[1546]: time="2025-09-06T00:06:32.909943347Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:06:32.929830 kubelet[2636]: I0906 00:06:32.929559 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n9qxb" podStartSLOduration=2.292170581 podStartE2EDuration="8.929541573s" podCreationTimestamp="2025-09-06 00:06:24 +0000 UTC" firstStartedPulling="2025-09-06 00:06:25.049972416 +0000 UTC m=+6.297413761" lastFinishedPulling="2025-09-06 00:06:31.687343408 +0000 UTC m=+12.934784753" observedRunningTime="2025-09-06 00:06:31.963117798 +0000 UTC m=+13.210559143" watchObservedRunningTime="2025-09-06 00:06:32.929541573 +0000 UTC m=+14.176982918"
Sep 6 00:06:32.931253 containerd[1546]: time="2025-09-06T00:06:32.931195824Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\""
Sep 6 00:06:32.931770 containerd[1546]: time="2025-09-06T00:06:32.931743933Z" level=info msg="StartContainer for \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\""
Sep 6 00:06:32.982152 containerd[1546]: time="2025-09-06T00:06:32.982083400Z" level=info msg="StartContainer for \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\" returns successfully"
Sep 6 00:06:33.005311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad-rootfs.mount: Deactivated successfully.
Sep 6 00:06:33.014815 containerd[1546]: time="2025-09-06T00:06:33.014743532Z" level=info msg="shim disconnected" id=b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad namespace=k8s.io
Sep 6 00:06:33.014815 containerd[1546]: time="2025-09-06T00:06:33.014803716Z" level=warning msg="cleaning up after shim disconnected" id=b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad namespace=k8s.io
Sep 6 00:06:33.014815 containerd[1546]: time="2025-09-06T00:06:33.014811279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:33.914431 kubelet[2636]: E0906 00:06:33.914375 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:33.918043 containerd[1546]: time="2025-09-06T00:06:33.917968747Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:06:33.951500 containerd[1546]: time="2025-09-06T00:06:33.951446283Z" level=info msg="CreateContainer within sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\""
Sep 6 00:06:33.952092 containerd[1546]: time="2025-09-06T00:06:33.951977894Z" level=info msg="StartContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\""
Sep 6 00:06:34.015632 containerd[1546]: time="2025-09-06T00:06:34.015575945Z" level=info msg="StartContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" returns successfully"
Sep 6 00:06:34.132423 kubelet[2636]: I0906 00:06:34.131994 2636 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 6 00:06:34.246020 kubelet[2636]: I0906 00:06:34.245564 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dd7b51f-4262-47b7-97ea-169b08adfb39-config-volume\") pod \"coredns-7c65d6cfc9-82q7d\" (UID: \"8dd7b51f-4262-47b7-97ea-169b08adfb39\") " pod="kube-system/coredns-7c65d6cfc9-82q7d"
Sep 6 00:06:34.246020 kubelet[2636]: I0906 00:06:34.245633 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkxb\" (UniqueName: \"kubernetes.io/projected/8dd7b51f-4262-47b7-97ea-169b08adfb39-kube-api-access-7vkxb\") pod \"coredns-7c65d6cfc9-82q7d\" (UID: \"8dd7b51f-4262-47b7-97ea-169b08adfb39\") " pod="kube-system/coredns-7c65d6cfc9-82q7d"
Sep 6 00:06:34.246020 kubelet[2636]: I0906 00:06:34.245654 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkp88\" (UniqueName: \"kubernetes.io/projected/7b9492f5-2b73-416a-95eb-549d37a9672f-kube-api-access-gkp88\") pod \"coredns-7c65d6cfc9-7gm7x\" (UID: \"7b9492f5-2b73-416a-95eb-549d37a9672f\") " pod="kube-system/coredns-7c65d6cfc9-7gm7x"
Sep 6 00:06:34.246020 kubelet[2636]: I0906 00:06:34.245674 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b9492f5-2b73-416a-95eb-549d37a9672f-config-volume\") pod \"coredns-7c65d6cfc9-7gm7x\" (UID: \"7b9492f5-2b73-416a-95eb-549d37a9672f\") " pod="kube-system/coredns-7c65d6cfc9-7gm7x"
Sep 6 00:06:34.476869 kubelet[2636]: E0906 00:06:34.476832 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:34.477900 kubelet[2636]: E0906 00:06:34.477784 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:34.478576 containerd[1546]: time="2025-09-06T00:06:34.478543472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82q7d,Uid:8dd7b51f-4262-47b7-97ea-169b08adfb39,Namespace:kube-system,Attempt:0,}"
Sep 6 00:06:34.478835 containerd[1546]: time="2025-09-06T00:06:34.478812093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7gm7x,Uid:7b9492f5-2b73-416a-95eb-549d37a9672f,Namespace:kube-system,Attempt:0,}"
Sep 6 00:06:34.918484 kubelet[2636]: E0906 00:06:34.918083 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:34.935344 kubelet[2636]: I0906 00:06:34.935271 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mhxr2" podStartSLOduration=6.947781698 podStartE2EDuration="11.935252073s" podCreationTimestamp="2025-09-06 00:06:23 +0000 UTC" firstStartedPulling="2025-09-06 00:06:24.953396547 +0000 UTC m=+6.200837892" lastFinishedPulling="2025-09-06 00:06:29.940866882 +0000 UTC m=+11.188308267" observedRunningTime="2025-09-06 00:06:34.934836676 +0000 UTC m=+16.182278061" watchObservedRunningTime="2025-09-06 00:06:34.935252073 +0000 UTC m=+16.182693418"
Sep 6 00:06:35.919673 kubelet[2636]: E0906 00:06:35.919644 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:36.044168 systemd-networkd[1230]: cilium_host: Link UP
Sep 6 00:06:36.044293 systemd-networkd[1230]: cilium_net: Link UP
Sep 6 00:06:36.044429 systemd-networkd[1230]: cilium_net: Gained carrier
Sep 6 00:06:36.044548 systemd-networkd[1230]: cilium_host: Gained carrier
Sep 6 00:06:36.115298 systemd-networkd[1230]: cilium_vxlan: Link UP
Sep 6 00:06:36.115307 systemd-networkd[1230]: cilium_vxlan: Gained carrier
Sep 6 00:06:36.216761 systemd-networkd[1230]: cilium_host: Gained IPv6LL
Sep 6 00:06:36.248797 systemd-networkd[1230]: cilium_net: Gained IPv6LL
Sep 6 00:06:36.366630 kernel: NET: Registered PF_ALG protocol family
Sep 6 00:06:36.923383 kubelet[2636]: E0906 00:06:36.923336 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:36.940535 systemd-networkd[1230]: lxc_health: Link UP
Sep 6 00:06:36.949104 systemd-networkd[1230]: lxc_health: Gained carrier
Sep 6 00:06:37.042517 systemd-networkd[1230]: lxcfa456c65f0e6: Link UP
Sep 6 00:06:37.052279 systemd-networkd[1230]: lxc551eb2d8be2e: Link UP
Sep 6 00:06:37.063199 kernel: eth0: renamed from tmp93bd9
Sep 6 00:06:37.067625 kernel: eth0: renamed from tmpdb4e1
Sep 6 00:06:37.074798 systemd-networkd[1230]: lxcfa456c65f0e6: Gained carrier
Sep 6 00:06:37.075126 systemd-networkd[1230]: lxc551eb2d8be2e: Gained carrier
Sep 6 00:06:37.913055 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL
Sep 6 00:06:38.744866 systemd-networkd[1230]: lxcfa456c65f0e6: Gained IPv6LL
Sep 6 00:06:38.869640 kubelet[2636]: E0906 00:06:38.869576 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:38.926687 kubelet[2636]: E0906 00:06:38.926651 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:38.937841 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Sep 6 00:06:39.000841 systemd-networkd[1230]: lxc551eb2d8be2e: Gained IPv6LL
Sep 6 00:06:39.927201 kubelet[2636]: E0906 00:06:39.927143 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:40.607615 containerd[1546]: time="2025-09-06T00:06:40.607494353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:06:40.607615 containerd[1546]: time="2025-09-06T00:06:40.607543006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:06:40.607615 containerd[1546]: time="2025-09-06T00:06:40.607561052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:40.608167 containerd[1546]: time="2025-09-06T00:06:40.607649477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:40.611591 containerd[1546]: time="2025-09-06T00:06:40.609786726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:06:40.611772 containerd[1546]: time="2025-09-06T00:06:40.611730321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:06:40.613113 containerd[1546]: time="2025-09-06T00:06:40.611787137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:40.613281 containerd[1546]: time="2025-09-06T00:06:40.613242712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:40.632722 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:06:40.634697 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:06:40.655054 containerd[1546]: time="2025-09-06T00:06:40.655017827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7gm7x,Uid:7b9492f5-2b73-416a-95eb-549d37a9672f,Namespace:kube-system,Attempt:0,} returns sandbox id \"93bd9d6883ff6ca16946a932f349d00c8afc7335751ac8d21886cae044fac05a\""
Sep 6 00:06:40.655180 containerd[1546]: time="2025-09-06T00:06:40.655069842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-82q7d,Uid:8dd7b51f-4262-47b7-97ea-169b08adfb39,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4e13c68d634095afa38ef4045028a65276d040eb3078a46a259a9ae973093a\""
Sep 6 00:06:40.655739 kubelet[2636]: E0906 00:06:40.655710 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:40.655839 kubelet[2636]: E0906 00:06:40.655743 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:40.657852 containerd[1546]: time="2025-09-06T00:06:40.657814065Z" level=info msg="CreateContainer within sandbox \"93bd9d6883ff6ca16946a932f349d00c8afc7335751ac8d21886cae044fac05a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:06:40.658050 containerd[1546]: time="2025-09-06T00:06:40.657814145Z" level=info msg="CreateContainer within sandbox \"db4e13c68d634095afa38ef4045028a65276d040eb3078a46a259a9ae973093a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:06:40.675003 containerd[1546]: time="2025-09-06T00:06:40.674952593Z" level=info msg="CreateContainer within sandbox \"93bd9d6883ff6ca16946a932f349d00c8afc7335751ac8d21886cae044fac05a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41e0365431e91634e946f33197c30fa26edef800bde99367d138735a88d019a8\""
Sep 6 00:06:40.675458 containerd[1546]: time="2025-09-06T00:06:40.675428329Z" level=info msg="StartContainer for \"41e0365431e91634e946f33197c30fa26edef800bde99367d138735a88d019a8\""
Sep 6 00:06:40.677212 containerd[1546]: time="2025-09-06T00:06:40.677169906Z" level=info msg="CreateContainer within sandbox \"db4e13c68d634095afa38ef4045028a65276d040eb3078a46a259a9ae973093a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60aa357b4bc84370a3ed5f24837dc0580883ccb0dcc8b8dcc145935f868a8c34\""
Sep 6 00:06:40.677926 containerd[1546]: time="2025-09-06T00:06:40.677901714Z" level=info msg="StartContainer for \"60aa357b4bc84370a3ed5f24837dc0580883ccb0dcc8b8dcc145935f868a8c34\""
Sep 6 00:06:40.726575 containerd[1546]: time="2025-09-06T00:06:40.726530424Z" level=info msg="StartContainer for \"41e0365431e91634e946f33197c30fa26edef800bde99367d138735a88d019a8\" returns successfully"
Sep 6 00:06:40.726951 containerd[1546]: time="2025-09-06T00:06:40.726647138Z" level=info msg="StartContainer for \"60aa357b4bc84370a3ed5f24837dc0580883ccb0dcc8b8dcc145935f868a8c34\" returns successfully"
Sep 6 00:06:40.929873 kubelet[2636]: E0906 00:06:40.929688 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:40.933663 kubelet[2636]: E0906 00:06:40.933181 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:40.941025 kubelet[2636]: I0906 00:06:40.940970 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-82q7d" podStartSLOduration=16.940955863 podStartE2EDuration="16.940955863s" podCreationTimestamp="2025-09-06 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:40.940154835 +0000 UTC m=+22.187596180" watchObservedRunningTime="2025-09-06 00:06:40.940955863 +0000 UTC m=+22.188397208"
Sep 6 00:06:40.960178 kubelet[2636]: I0906 00:06:40.959922 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7gm7x" podStartSLOduration=16.959906548 podStartE2EDuration="16.959906548s" podCreationTimestamp="2025-09-06 00:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:40.959888823 +0000 UTC m=+22.207330168" watchObservedRunningTime="2025-09-06 00:06:40.959906548 +0000 UTC m=+22.207347853"
Sep 6 00:06:41.934494 kubelet[2636]: E0906 00:06:41.934364 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:41.934939 kubelet[2636]: E0906 00:06:41.934901 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:42.936041 kubelet[2636]: E0906 00:06:42.935731 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:42.936041 kubelet[2636]: E0906 00:06:42.935954 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:49.496887 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:52446.service - OpenSSH per-connection server daemon (10.0.0.1:52446).
Sep 6 00:06:49.531506 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 52446 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:49.532995 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:49.536670 systemd-logind[1525]: New session 8 of user core.
Sep 6 00:06:49.548945 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 6 00:06:49.676002 sshd[4033]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:49.680439 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:52446.service: Deactivated successfully.
Sep 6 00:06:49.680498 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:06:49.682581 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:06:49.683400 systemd-logind[1525]: Removed session 8.
Sep 6 00:06:54.690927 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:48212.service - OpenSSH per-connection server daemon (10.0.0.1:48212).
Sep 6 00:06:54.732386 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 48212 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:54.734038 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:54.738671 systemd-logind[1525]: New session 9 of user core.
Sep 6 00:06:54.750952 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 6 00:06:54.883116 sshd[4050]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:54.886903 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:48212.service: Deactivated successfully.
Sep 6 00:06:54.889416 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:06:54.890061 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:06:54.891321 systemd-logind[1525]: Removed session 9.
Sep 6 00:06:59.894844 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:48214.service - OpenSSH per-connection server daemon (10.0.0.1:48214).
Sep 6 00:06:59.927256 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 48214 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:59.928616 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:59.933071 systemd-logind[1525]: New session 10 of user core.
Sep 6 00:06:59.947062 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 6 00:07:00.057137 sshd[4068]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:00.065859 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:43790.service - OpenSSH per-connection server daemon (10.0.0.1:43790).
Sep 6 00:07:00.066290 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:48214.service: Deactivated successfully.
Sep 6 00:07:00.068098 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 00:07:00.068905 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit.
Sep 6 00:07:00.070240 systemd-logind[1525]: Removed session 10.
Sep 6 00:07:00.105008 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 43790 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:00.106322 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:00.110064 systemd-logind[1525]: New session 11 of user core.
Sep 6 00:07:00.121953 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 6 00:07:00.264715 sshd[4082]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:00.274925 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804).
Sep 6 00:07:00.277109 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:43790.service: Deactivated successfully.
Sep 6 00:07:00.284031 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 00:07:00.284883 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit.
Sep 6 00:07:00.288655 systemd-logind[1525]: Removed session 11.
Sep 6 00:07:00.319536 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:00.321319 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:00.326092 systemd-logind[1525]: New session 12 of user core.
Sep 6 00:07:00.336948 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 6 00:07:00.451694 sshd[4096]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:00.456212 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:43804.service: Deactivated successfully.
Sep 6 00:07:00.458825 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 00:07:00.459177 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit.
Sep 6 00:07:00.460145 systemd-logind[1525]: Removed session 12.
Sep 6 00:07:05.461817 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:43808.service - OpenSSH per-connection server daemon (10.0.0.1:43808).
Sep 6 00:07:05.493702 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 43808 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:05.494865 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:05.499190 systemd-logind[1525]: New session 13 of user core.
Sep 6 00:07:05.508821 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 6 00:07:05.618343 sshd[4114]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:05.621589 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit.
Sep 6 00:07:05.621896 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:43808.service: Deactivated successfully.
Sep 6 00:07:05.623780 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:07:05.625947 systemd-logind[1525]: Removed session 13.
Sep 6 00:07:10.640098 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288).
Sep 6 00:07:10.673013 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:10.674263 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:10.678766 systemd-logind[1525]: New session 14 of user core.
Sep 6 00:07:10.692000 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 6 00:07:10.808865 sshd[4129]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:10.826904 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:54290.service - OpenSSH per-connection server daemon (10.0.0.1:54290).
Sep 6 00:07:10.827309 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:54288.service: Deactivated successfully.
Sep 6 00:07:10.834153 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:07:10.835111 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:07:10.837385 systemd-logind[1525]: Removed session 14.
Sep 6 00:07:10.865525 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:10.866881 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:10.870547 systemd-logind[1525]: New session 15 of user core.
Sep 6 00:07:10.879830 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 00:07:11.098196 sshd[4141]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:11.109882 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:54300.service - OpenSSH per-connection server daemon (10.0.0.1:54300).
Sep 6 00:07:11.110312 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:54290.service: Deactivated successfully.
Sep 6 00:07:11.114229 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:07:11.114232 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:07:11.115806 systemd-logind[1525]: Removed session 15.
Sep 6 00:07:11.146479 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 54300 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:11.147859 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:11.152450 systemd-logind[1525]: New session 16 of user core.
Sep 6 00:07:11.163032 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 00:07:12.316542 sshd[4154]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:12.330447 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:54306.service - OpenSSH per-connection server daemon (10.0.0.1:54306).
Sep 6 00:07:12.331057 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:54300.service: Deactivated successfully.
Sep 6 00:07:12.336663 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:07:12.337959 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:07:12.338969 systemd-logind[1525]: Removed session 16.
Sep 6 00:07:12.390102 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 54306 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:12.391761 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:12.396817 systemd-logind[1525]: New session 17 of user core.
Sep 6 00:07:12.403985 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 00:07:12.634859 sshd[4174]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:12.644008 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:54320.service - OpenSSH per-connection server daemon (10.0.0.1:54320).
Sep 6 00:07:12.644516 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:54306.service: Deactivated successfully.
Sep 6 00:07:12.648514 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:07:12.649030 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:07:12.651425 systemd-logind[1525]: Removed session 17.
Sep 6 00:07:12.680290 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 54320 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:12.681620 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:12.685744 systemd-logind[1525]: New session 18 of user core.
Sep 6 00:07:12.696346 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 00:07:12.806820 sshd[4188]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:12.810474 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:54320.service: Deactivated successfully.
Sep 6 00:07:12.812683 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:07:12.813653 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:07:12.815532 systemd-logind[1525]: Removed session 18.
Sep 6 00:07:17.831170 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:54326.service - OpenSSH per-connection server daemon (10.0.0.1:54326).
Sep 6 00:07:17.868753 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 54326 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:17.871294 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:17.879219 systemd-logind[1525]: New session 19 of user core.
Sep 6 00:07:17.890007 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 00:07:18.016901 sshd[4210]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:18.020429 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:07:18.020619 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:54326.service: Deactivated successfully.
Sep 6 00:07:18.022295 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:07:18.022801 systemd-logind[1525]: Removed session 19.
Sep 6 00:07:23.027837 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:41510.service - OpenSSH per-connection server daemon (10.0.0.1:41510).
Sep 6 00:07:23.059993 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 41510 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:23.061390 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:23.065503 systemd-logind[1525]: New session 20 of user core.
Sep 6 00:07:23.081863 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 00:07:23.186943 sshd[4227]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:23.190139 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:41510.service: Deactivated successfully.
Sep 6 00:07:23.192418 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:07:23.192856 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:07:23.193959 systemd-logind[1525]: Removed session 20.
Sep 6 00:07:28.197884 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:41522.service - OpenSSH per-connection server daemon (10.0.0.1:41522).
Sep 6 00:07:28.235791 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 41522 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:28.237131 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:28.241416 systemd-logind[1525]: New session 21 of user core.
Sep 6 00:07:28.251900 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 00:07:28.358436 sshd[4246]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:28.371856 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:41536.service - OpenSSH per-connection server daemon (10.0.0.1:41536).
Sep 6 00:07:28.372228 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:41522.service: Deactivated successfully.
Sep 6 00:07:28.377391 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:07:28.381198 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:07:28.382526 systemd-logind[1525]: Removed session 21.
Sep 6 00:07:28.404882 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 41536 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:07:28.406241 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:07:28.411027 systemd-logind[1525]: New session 22 of user core.
Sep 6 00:07:28.418918 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 00:07:30.432455 containerd[1546]: time="2025-09-06T00:07:30.432288998Z" level=info msg="StopContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" with timeout 30 (s)"
Sep 6 00:07:30.433563 containerd[1546]: time="2025-09-06T00:07:30.432642888Z" level=info msg="Stop container \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" with signal terminated"
Sep 6 00:07:30.467516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91-rootfs.mount: Deactivated successfully.
Sep 6 00:07:30.484293 containerd[1546]: time="2025-09-06T00:07:30.484231880Z" level=info msg="shim disconnected" id=90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91 namespace=k8s.io
Sep 6 00:07:30.484293 containerd[1546]: time="2025-09-06T00:07:30.484288355Z" level=warning msg="cleaning up after shim disconnected" id=90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91 namespace=k8s.io
Sep 6 00:07:30.484293 containerd[1546]: time="2025-09-06T00:07:30.484296754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:30.497136 containerd[1546]: time="2025-09-06T00:07:30.496970161Z" level=info msg="StopContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" with timeout 2 (s)"
Sep 6 00:07:30.497560 containerd[1546]: time="2025-09-06T00:07:30.497481878Z" level=info msg="Stop container \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" with signal terminated"
Sep 6 00:07:30.503582 systemd-networkd[1230]: lxc_health: Link DOWN
Sep 6 00:07:30.503588 systemd-networkd[1230]: lxc_health: Lost carrier
Sep 6 00:07:30.506574 containerd[1546]: time="2025-09-06T00:07:30.506521232Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:07:30.539607 containerd[1546]: time="2025-09-06T00:07:30.539567234Z" level=info msg="StopContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" returns successfully"
Sep 6 00:07:30.540204 containerd[1546]: time="2025-09-06T00:07:30.540179142Z" level=info msg="StopPodSandbox for \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\""
Sep 6 00:07:30.542331 containerd[1546]: time="2025-09-06T00:07:30.542292364Z" level=info msg="Container to stop \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.544208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca-shm.mount: Deactivated successfully.
Sep 6 00:07:30.569710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703-rootfs.mount: Deactivated successfully.
Sep 6 00:07:30.572415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca-rootfs.mount: Deactivated successfully.
Sep 6 00:07:30.577359 containerd[1546]: time="2025-09-06T00:07:30.577276681Z" level=info msg="shim disconnected" id=65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703 namespace=k8s.io
Sep 6 00:07:30.577359 containerd[1546]: time="2025-09-06T00:07:30.577335756Z" level=warning msg="cleaning up after shim disconnected" id=65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703 namespace=k8s.io
Sep 6 00:07:30.577359 containerd[1546]: time="2025-09-06T00:07:30.577346515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:30.577745 containerd[1546]: time="2025-09-06T00:07:30.577282321Z" level=info msg="shim disconnected" id=cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca namespace=k8s.io
Sep 6 00:07:30.577745 containerd[1546]: time="2025-09-06T00:07:30.577439148Z" level=warning msg="cleaning up after shim disconnected" id=cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca namespace=k8s.io
Sep 6 00:07:30.577745 containerd[1546]: time="2025-09-06T00:07:30.577446707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:30.589954 containerd[1546]: time="2025-09-06T00:07:30.589783662Z" level=info msg="TearDown network for sandbox \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\" successfully"
Sep 6 00:07:30.589954 containerd[1546]: time="2025-09-06T00:07:30.589815540Z" level=info msg="StopPodSandbox for \"cb43b728f5727b2908f2d4bce46c82311898507044c40af434a739c2ba0bceca\" returns successfully"
Sep 6 00:07:30.591817 containerd[1546]: time="2025-09-06T00:07:30.591770774Z" level=info msg="StopContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" returns successfully"
Sep 6 00:07:30.592214 containerd[1546]: time="2025-09-06T00:07:30.592192418Z" level=info msg="StopPodSandbox for \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\""
Sep 6 00:07:30.592258 containerd[1546]: time="2025-09-06T00:07:30.592228175Z" level=info msg="Container to stop \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.592258 containerd[1546]: time="2025-09-06T00:07:30.592241134Z" level=info msg="Container to stop \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.592258 containerd[1546]: time="2025-09-06T00:07:30.592251173Z" level=info msg="Container to stop \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.592338 containerd[1546]: time="2025-09-06T00:07:30.592260253Z" level=info msg="Container to stop \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.592338 containerd[1546]: time="2025-09-06T00:07:30.592270332Z" level=info msg="Container to stop \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:07:30.625142 containerd[1546]: time="2025-09-06T00:07:30.625076554Z" level=info msg="shim disconnected" id=3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223 namespace=k8s.io
Sep 6 00:07:30.625305 containerd[1546]: time="2025-09-06T00:07:30.625173546Z" level=warning msg="cleaning up after shim disconnected" id=3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223 namespace=k8s.io
Sep 6 00:07:30.625305 containerd[1546]: time="2025-09-06T00:07:30.625184425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:30.637900 containerd[1546]: time="2025-09-06T00:07:30.637851712Z" level=info msg="TearDown network for sandbox \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" successfully"
Sep 6 00:07:30.637900 containerd[1546]: time="2025-09-06T00:07:30.637888949Z" level=info msg="StopPodSandbox for \"3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223\" returns successfully"
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776431 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hubble-tls\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776490 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-lib-modules\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776519 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-config-path\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776563 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-etc-cni-netd\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776577 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-cgroup\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777293 kubelet[2636]: I0906 00:07:30.776592 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-run\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776618 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-net\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776634 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-kernel\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776652 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fh5b\" (UniqueName: \"kubernetes.io/projected/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-kube-api-access-5fh5b\") pod \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\" (UID: \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776670 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-clustermesh-secrets\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776685 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvtm7\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777781 kubelet[2636]: I0906 00:07:30.776699 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cni-path\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777912 kubelet[2636]: I0906 00:07:30.776715 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-cilium-config-path\") pod \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\" (UID: \"581f66a8-8247-40c2-9cd3-2b0cd2a35d91\") "
Sep 6 00:07:30.777912 kubelet[2636]: I0906 00:07:30.776730 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-xtables-lock\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777912 kubelet[2636]: I0906 00:07:30.776746 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hostproc\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.777912 kubelet[2636]: I0906 00:07:30.776760 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-bpf-maps\") pod \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\" (UID: \"7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90\") "
Sep 6 00:07:30.779570 kubelet[2636]: I0906 00:07:30.779374 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.779570 kubelet[2636]: I0906 00:07:30.779441 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.779965 kubelet[2636]: I0906 00:07:30.779940 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.780577 kubelet[2636]: I0906 00:07:30.780536 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.780577 kubelet[2636]: I0906 00:07:30.780571 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.780649 kubelet[2636]: I0906 00:07:30.780587 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.780649 kubelet[2636]: I0906 00:07:30.780613 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.780649 kubelet[2636]: I0906 00:07:30.780628 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cni-path" (OuterVolumeSpecName: "cni-path") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.784639 kubelet[2636]: I0906 00:07:30.784107 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:07:30.784639 kubelet[2636]: I0906 00:07:30.784163 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.785953 kubelet[2636]: I0906 00:07:30.785924 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "581f66a8-8247-40c2-9cd3-2b0cd2a35d91" (UID: "581f66a8-8247-40c2-9cd3-2b0cd2a35d91"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:07:30.786032 kubelet[2636]: I0906 00:07:30.785968 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hostproc" (OuterVolumeSpecName: "hostproc") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:07:30.787970 kubelet[2636]: I0906 00:07:30.787941 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:07:30.788148 kubelet[2636]: I0906 00:07:30.788128 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:07:30.788457 kubelet[2636]: I0906 00:07:30.788425 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-kube-api-access-5fh5b" (OuterVolumeSpecName: "kube-api-access-5fh5b") pod "581f66a8-8247-40c2-9cd3-2b0cd2a35d91" (UID: "581f66a8-8247-40c2-9cd3-2b0cd2a35d91"). InnerVolumeSpecName "kube-api-access-5fh5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:07:30.788837 kubelet[2636]: I0906 00:07:30.788812 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7" (OuterVolumeSpecName: "kube-api-access-zvtm7") pod "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" (UID: "7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90"). InnerVolumeSpecName "kube-api-access-zvtm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:07:30.877645 kubelet[2636]: I0906 00:07:30.877585 2636 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877645 kubelet[2636]: I0906 00:07:30.877636 2636 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877645 kubelet[2636]: I0906 00:07:30.877645 2636 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877645 kubelet[2636]: I0906 00:07:30.877654 2636 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877645 kubelet[2636]: I0906 00:07:30.877662 2636 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877671 2636 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877681 2636 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877688 2636 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877696 2636 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877704 2636 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877712 2636 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877720 2636 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.877855 kubelet[2636]: I0906 00:07:30.877727 2636 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvtm7\" (UniqueName: \"kubernetes.io/projected/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-kube-api-access-zvtm7\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.878022 kubelet[2636]: I0906 00:07:30.877736 2636 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.878022 kubelet[2636]: I0906 00:07:30.877744 2636 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:30.878022 kubelet[2636]: I0906 00:07:30.877753 2636 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fh5b\" (UniqueName: \"kubernetes.io/projected/581f66a8-8247-40c2-9cd3-2b0cd2a35d91-kube-api-access-5fh5b\") on node \"localhost\" DevicePath \"\""
Sep 6 00:07:31.048572 kubelet[2636]: I0906 00:07:31.046719 2636 scope.go:117] "RemoveContainer" containerID="65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703"
Sep 6 00:07:31.053013 containerd[1546]: time="2025-09-06T00:07:31.051802385Z" level=info msg="RemoveContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\""
Sep 6 00:07:31.061291 containerd[1546]: time="2025-09-06T00:07:31.060461893Z" level=info msg="RemoveContainer for \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" returns successfully"
Sep 6 00:07:31.063377 kubelet[2636]: I0906 00:07:31.061532 2636 scope.go:117] "RemoveContainer" containerID="b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad"
Sep 6 00:07:31.064675 containerd[1546]: time="2025-09-06T00:07:31.063581524Z" level=info msg="RemoveContainer for \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\""
Sep 6 00:07:31.067639 containerd[1546]: time="2025-09-06T00:07:31.067540667Z" level=info msg="RemoveContainer for \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\" returns successfully"
Sep 6 00:07:31.068988 kubelet[2636]: I0906 00:07:31.067740 2636 scope.go:117] "RemoveContainer" containerID="763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa"
Sep 6 00:07:31.070420 containerd[1546]: time="2025-09-06T00:07:31.070384780Z" level=info msg="RemoveContainer for \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\""
Sep 6 00:07:31.077430 containerd[1546]: time="2025-09-06T00:07:31.077390380Z" level=info msg="RemoveContainer for \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\" returns successfully"
Sep 6 00:07:31.077788 kubelet[2636]: I0906 00:07:31.077665 2636 scope.go:117] "RemoveContainer" containerID="2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362"
Sep 6 00:07:31.078850 containerd[1546]: time="2025-09-06T00:07:31.078729952Z" level=info msg="RemoveContainer for \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\""
Sep 6 00:07:31.082947 containerd[1546]: time="2025-09-06T00:07:31.082897019Z" level=info msg="RemoveContainer for \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\" returns successfully"
Sep 6 00:07:31.084173 kubelet[2636]: I0906 00:07:31.084091 2636 scope.go:117] "RemoveContainer" containerID="ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23"
Sep 6 00:07:31.085891 containerd[1546]: time="2025-09-06T00:07:31.085862142Z" level=info msg="RemoveContainer for \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\""
Sep 6 00:07:31.090146 containerd[1546]: time="2025-09-06T00:07:31.090104723Z" level=info msg="RemoveContainer for \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\" returns successfully"
Sep 6 00:07:31.090928 kubelet[2636]: I0906 00:07:31.090828 2636 scope.go:117] "RemoveContainer" containerID="65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703"
Sep 6 00:07:31.091343 containerd[1546]: time="2025-09-06T00:07:31.091310387Z" level=error msg="ContainerStatus for 
\"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\": not found" Sep 6 00:07:31.103045 kubelet[2636]: E0906 00:07:31.102749 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\": not found" containerID="65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703" Sep 6 00:07:31.103632 kubelet[2636]: I0906 00:07:31.102788 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703"} err="failed to get container status \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\": rpc error: code = NotFound desc = an error occurred when try to find container \"65db0c6dffb10004355eae5720107058ee07d3e01e5f061fbd55edeecfa94703\": not found" Sep 6 00:07:31.103632 kubelet[2636]: I0906 00:07:31.103582 2636 scope.go:117] "RemoveContainer" containerID="b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad" Sep 6 00:07:31.104407 containerd[1546]: time="2025-09-06T00:07:31.104367503Z" level=error msg="ContainerStatus for \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\": not found" Sep 6 00:07:31.104834 kubelet[2636]: E0906 00:07:31.104735 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\": not found" 
containerID="b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad" Sep 6 00:07:31.104834 kubelet[2636]: I0906 00:07:31.104760 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad"} err="failed to get container status \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"b32bbf41a610ce62e16f33d50a768d8513c3945a3d3b5f37eb5ff933f444d5ad\": not found" Sep 6 00:07:31.104834 kubelet[2636]: I0906 00:07:31.104778 2636 scope.go:117] "RemoveContainer" containerID="763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa" Sep 6 00:07:31.105453 containerd[1546]: time="2025-09-06T00:07:31.105399180Z" level=error msg="ContainerStatus for \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\": not found" Sep 6 00:07:31.105658 kubelet[2636]: E0906 00:07:31.105540 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\": not found" containerID="763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa" Sep 6 00:07:31.105697 kubelet[2636]: I0906 00:07:31.105666 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa"} err="failed to get container status \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"763acdf1332a92427f0e1e666d494fb30c04a13e85ab906a9261c5c73e31d1fa\": not found" Sep 6 00:07:31.105697 
kubelet[2636]: I0906 00:07:31.105684 2636 scope.go:117] "RemoveContainer" containerID="2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362" Sep 6 00:07:31.105923 containerd[1546]: time="2025-09-06T00:07:31.105894901Z" level=error msg="ContainerStatus for \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\": not found" Sep 6 00:07:31.106017 kubelet[2636]: E0906 00:07:31.106001 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\": not found" containerID="2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362" Sep 6 00:07:31.106280 kubelet[2636]: I0906 00:07:31.106254 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362"} err="failed to get container status \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c49a4dfbe90ac1bd7fa852e08bbd01649b6af014a42374ef84a88e8dfb23362\": not found" Sep 6 00:07:31.106357 kubelet[2636]: I0906 00:07:31.106343 2636 scope.go:117] "RemoveContainer" containerID="ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23" Sep 6 00:07:31.106883 containerd[1546]: time="2025-09-06T00:07:31.106852504Z" level=error msg="ContainerStatus for \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\": not found" Sep 6 00:07:31.107184 kubelet[2636]: E0906 00:07:31.107081 2636 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\": not found" containerID="ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23" Sep 6 00:07:31.107184 kubelet[2636]: I0906 00:07:31.107103 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23"} err="failed to get container status \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebf687483bcee04e16b605a9569857c1a1dd2367b83dd5ab30b4b2b01913da23\": not found" Sep 6 00:07:31.107184 kubelet[2636]: I0906 00:07:31.107122 2636 scope.go:117] "RemoveContainer" containerID="90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91" Sep 6 00:07:31.110620 containerd[1546]: time="2025-09-06T00:07:31.110295429Z" level=info msg="RemoveContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\"" Sep 6 00:07:31.131982 containerd[1546]: time="2025-09-06T00:07:31.131888462Z" level=info msg="RemoveContainer for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" returns successfully" Sep 6 00:07:31.132378 kubelet[2636]: I0906 00:07:31.132143 2636 scope.go:117] "RemoveContainer" containerID="90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91" Sep 6 00:07:31.145884 containerd[1546]: time="2025-09-06T00:07:31.145773512Z" level=error msg="ContainerStatus for \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\": not found" Sep 6 00:07:31.146355 kubelet[2636]: E0906 00:07:31.146311 2636 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\": not found" containerID="90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91" Sep 6 00:07:31.146460 kubelet[2636]: I0906 00:07:31.146386 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91"} err="failed to get container status \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b5406154ef7b8a01fa41e4cbbe237f5757446e17fd063dd6b159bee73afb91\": not found" Sep 6 00:07:31.467083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223-rootfs.mount: Deactivated successfully. Sep 6 00:07:31.467226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3eb5caefa04e8e21443bfc96e4f8716433f04722582c930ca95580619a9ee223-shm.mount: Deactivated successfully. Sep 6 00:07:31.467326 systemd[1]: var-lib-kubelet-pods-581f66a8\x2d8247\x2d40c2\x2d9cd3\x2d2b0cd2a35d91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5fh5b.mount: Deactivated successfully. Sep 6 00:07:31.467422 systemd[1]: var-lib-kubelet-pods-7832da3a\x2dfd1e\x2d40b4\x2da39c\x2dffc4d9e0eb90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzvtm7.mount: Deactivated successfully. Sep 6 00:07:31.467508 systemd[1]: var-lib-kubelet-pods-7832da3a\x2dfd1e\x2d40b4\x2da39c\x2dffc4d9e0eb90-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:07:31.467591 systemd[1]: var-lib-kubelet-pods-7832da3a\x2dfd1e\x2d40b4\x2da39c\x2dffc4d9e0eb90-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:07:32.364968 sshd[4259]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:32.374845 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:39710.service - OpenSSH per-connection server daemon (10.0.0.1:39710). Sep 6 00:07:32.375225 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:41536.service: Deactivated successfully. Sep 6 00:07:32.378198 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:07:32.378274 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:07:32.379655 systemd-logind[1525]: Removed session 22. Sep 6 00:07:32.407906 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 39710 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:07:32.409366 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:07:32.414628 systemd-logind[1525]: New session 23 of user core. Sep 6 00:07:32.423883 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 6 00:07:32.835659 kubelet[2636]: I0906 00:07:32.835623 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581f66a8-8247-40c2-9cd3-2b0cd2a35d91" path="/var/lib/kubelet/pods/581f66a8-8247-40c2-9cd3-2b0cd2a35d91/volumes" Sep 6 00:07:32.836083 kubelet[2636]: I0906 00:07:32.836060 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" path="/var/lib/kubelet/pods/7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90/volumes" Sep 6 00:07:33.560984 sshd[4426]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:33.566920 systemd[1]: Started sshd@23-10.0.0.96:22-10.0.0.1:39714.service - OpenSSH per-connection server daemon (10.0.0.1:39714). Sep 6 00:07:33.567591 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:39710.service: Deactivated successfully. Sep 6 00:07:33.577537 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:07:33.581784 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit. 
Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581789 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="apply-sysctl-overwrites" Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581830 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="mount-bpf-fs" Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581838 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="mount-cgroup" Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581844 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="581f66a8-8247-40c2-9cd3-2b0cd2a35d91" containerName="cilium-operator" Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581849 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="clean-cilium-state" Sep 6 00:07:33.584487 kubelet[2636]: E0906 00:07:33.581855 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="cilium-agent" Sep 6 00:07:33.584487 kubelet[2636]: I0906 00:07:33.581884 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="581f66a8-8247-40c2-9cd3-2b0cd2a35d91" containerName="cilium-operator" Sep 6 00:07:33.584487 kubelet[2636]: I0906 00:07:33.584269 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="7832da3a-fd1e-40b4-a39c-ffc4d9e0eb90" containerName="cilium-agent" Sep 6 00:07:33.591625 systemd-logind[1525]: Removed session 23. Sep 6 00:07:33.634690 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:07:33.636400 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:07:33.642677 systemd-logind[1525]: New session 24 of user core. 
Sep 6 00:07:33.651892 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 6 00:07:33.693286 kubelet[2636]: I0906 00:07:33.693218 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-cilium-run\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693286 kubelet[2636]: I0906 00:07:33.693275 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-hostproc\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693286 kubelet[2636]: I0906 00:07:33.693296 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-host-proc-sys-net\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693481 kubelet[2636]: I0906 00:07:33.693317 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-xtables-lock\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693481 kubelet[2636]: I0906 00:07:33.693334 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-cilium-cgroup\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693481 kubelet[2636]: I0906 00:07:33.693348 2636 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-lib-modules\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693481 kubelet[2636]: I0906 00:07:33.693363 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-cni-path\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.693481 kubelet[2636]: I0906 00:07:33.693377 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/47a8a729-94ec-4e83-929a-6e2d0beeaa03-cilium-ipsec-secrets\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696112 kubelet[2636]: I0906 00:07:33.693393 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxpfc\" (UniqueName: \"kubernetes.io/projected/47a8a729-94ec-4e83-929a-6e2d0beeaa03-kube-api-access-nxpfc\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696157 kubelet[2636]: I0906 00:07:33.696116 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47a8a729-94ec-4e83-929a-6e2d0beeaa03-cilium-config-path\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696157 kubelet[2636]: I0906 00:07:33.696138 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-host-proc-sys-kernel\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696213 kubelet[2636]: I0906 00:07:33.696158 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-bpf-maps\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696213 kubelet[2636]: I0906 00:07:33.696173 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47a8a729-94ec-4e83-929a-6e2d0beeaa03-etc-cni-netd\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696380 kubelet[2636]: I0906 00:07:33.696189 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47a8a729-94ec-4e83-929a-6e2d0beeaa03-hubble-tls\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.696410 kubelet[2636]: I0906 00:07:33.696388 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47a8a729-94ec-4e83-929a-6e2d0beeaa03-clustermesh-secrets\") pod \"cilium-vg9nv\" (UID: \"47a8a729-94ec-4e83-929a-6e2d0beeaa03\") " pod="kube-system/cilium-vg9nv" Sep 6 00:07:33.702550 sshd[4440]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:33.711864 systemd[1]: Started sshd@24-10.0.0.96:22-10.0.0.1:39724.service - OpenSSH per-connection server daemon (10.0.0.1:39724). 
Sep 6 00:07:33.712263 systemd[1]: sshd@23-10.0.0.96:22-10.0.0.1:39714.service: Deactivated successfully. Sep 6 00:07:33.715092 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:07:33.716135 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:07:33.716943 systemd-logind[1525]: Removed session 24. Sep 6 00:07:33.744114 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 39724 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:07:33.745388 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:07:33.749135 systemd-logind[1525]: New session 25 of user core. Sep 6 00:07:33.759850 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 6 00:07:33.896871 kubelet[2636]: E0906 00:07:33.896712 2636 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:33.904492 kubelet[2636]: E0906 00:07:33.904456 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:33.905079 containerd[1546]: time="2025-09-06T00:07:33.905040967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vg9nv,Uid:47a8a729-94ec-4e83-929a-6e2d0beeaa03,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:33.922263 containerd[1546]: time="2025-09-06T00:07:33.922004243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:33.922263 containerd[1546]: time="2025-09-06T00:07:33.922065879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:33.922263 containerd[1546]: time="2025-09-06T00:07:33.922082797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:33.922263 containerd[1546]: time="2025-09-06T00:07:33.922173471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:33.963751 containerd[1546]: time="2025-09-06T00:07:33.963714084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vg9nv,Uid:47a8a729-94ec-4e83-929a-6e2d0beeaa03,Namespace:kube-system,Attempt:0,} returns sandbox id \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\"" Sep 6 00:07:33.964462 kubelet[2636]: E0906 00:07:33.964441 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:33.966904 containerd[1546]: time="2025-09-06T00:07:33.966743789Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:07:33.976844 containerd[1546]: time="2025-09-06T00:07:33.976751599Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"59779c902c08ddcd8fe2447062a64897ef05fd3b61f187b00fc88a995923a26b\"" Sep 6 00:07:33.978659 containerd[1546]: time="2025-09-06T00:07:33.978226654Z" level=info msg="StartContainer for \"59779c902c08ddcd8fe2447062a64897ef05fd3b61f187b00fc88a995923a26b\"" Sep 6 00:07:34.025708 containerd[1546]: time="2025-09-06T00:07:34.025656156Z" level=info msg="StartContainer for \"59779c902c08ddcd8fe2447062a64897ef05fd3b61f187b00fc88a995923a26b\" returns 
successfully" Sep 6 00:07:34.058957 containerd[1546]: time="2025-09-06T00:07:34.058903300Z" level=info msg="shim disconnected" id=59779c902c08ddcd8fe2447062a64897ef05fd3b61f187b00fc88a995923a26b namespace=k8s.io Sep 6 00:07:34.059152 containerd[1546]: time="2025-09-06T00:07:34.059135964Z" level=warning msg="cleaning up after shim disconnected" id=59779c902c08ddcd8fe2447062a64897ef05fd3b61f187b00fc88a995923a26b namespace=k8s.io Sep 6 00:07:34.059217 containerd[1546]: time="2025-09-06T00:07:34.059192920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:07:34.075873 kubelet[2636]: E0906 00:07:34.075818 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:34.078848 containerd[1546]: time="2025-09-06T00:07:34.078810013Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:07:34.105324 containerd[1546]: time="2025-09-06T00:07:34.105190094Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c71c196c0ec0c2418a2e0b8252878a1bf687a89c9908cb5d9da16bcf92ab541\"" Sep 6 00:07:34.105820 containerd[1546]: time="2025-09-06T00:07:34.105663743Z" level=info msg="StartContainer for \"5c71c196c0ec0c2418a2e0b8252878a1bf687a89c9908cb5d9da16bcf92ab541\"" Sep 6 00:07:34.159443 containerd[1546]: time="2025-09-06T00:07:34.159269809Z" level=info msg="StartContainer for \"5c71c196c0ec0c2418a2e0b8252878a1bf687a89c9908cb5d9da16bcf92ab541\" returns successfully" Sep 6 00:07:34.184005 containerd[1546]: time="2025-09-06T00:07:34.183943124Z" level=info msg="shim disconnected" id=5c71c196c0ec0c2418a2e0b8252878a1bf687a89c9908cb5d9da16bcf92ab541 
namespace=k8s.io
Sep 6 00:07:34.184005 containerd[1546]: time="2025-09-06T00:07:34.183993801Z" level=warning msg="cleaning up after shim disconnected" id=5c71c196c0ec0c2418a2e0b8252878a1bf687a89c9908cb5d9da16bcf92ab541 namespace=k8s.io
Sep 6 00:07:34.184005 containerd[1546]: time="2025-09-06T00:07:34.184002120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:35.079375 kubelet[2636]: E0906 00:07:35.079324 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:35.081279 containerd[1546]: time="2025-09-06T00:07:35.081186726Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:07:35.101234 containerd[1546]: time="2025-09-06T00:07:35.100825018Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572\""
Sep 6 00:07:35.101638 containerd[1546]: time="2025-09-06T00:07:35.101611649Z" level=info msg="StartContainer for \"1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572\""
Sep 6 00:07:35.164485 containerd[1546]: time="2025-09-06T00:07:35.163897756Z" level=info msg="StartContainer for \"1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572\" returns successfully"
Sep 6 00:07:35.217588 containerd[1546]: time="2025-09-06T00:07:35.217519324Z" level=info msg="shim disconnected" id=1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572 namespace=k8s.io
Sep 6 00:07:35.217588 containerd[1546]: time="2025-09-06T00:07:35.217574201Z" level=warning msg="cleaning up after shim disconnected" id=1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572 namespace=k8s.io
Sep 6 00:07:35.217588 containerd[1546]: time="2025-09-06T00:07:35.217587560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:35.802230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d133751e62ebdd45383ba497830d615c39be0b405d33fd69061984aae66f572-rootfs.mount: Deactivated successfully.
Sep 6 00:07:36.083338 kubelet[2636]: E0906 00:07:36.083289 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:36.086801 containerd[1546]: time="2025-09-06T00:07:36.086682534Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:07:36.098628 containerd[1546]: time="2025-09-06T00:07:36.098573078Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec\""
Sep 6 00:07:36.100786 containerd[1546]: time="2025-09-06T00:07:36.099860723Z" level=info msg="StartContainer for \"35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec\""
Sep 6 00:07:36.147232 containerd[1546]: time="2025-09-06T00:07:36.147188395Z" level=info msg="StartContainer for \"35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec\" returns successfully"
Sep 6 00:07:36.164578 containerd[1546]: time="2025-09-06T00:07:36.164520902Z" level=info msg="shim disconnected" id=35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec namespace=k8s.io
Sep 6 00:07:36.164578 containerd[1546]: time="2025-09-06T00:07:36.164573499Z" level=warning msg="cleaning up after shim disconnected" id=35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec namespace=k8s.io
Sep 6 00:07:36.164578 containerd[1546]: time="2025-09-06T00:07:36.164581578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:36.802309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35870ea96f433a1f47fbeb6f643581767a3fc8dc2136a068c4629b44b31849ec-rootfs.mount: Deactivated successfully.
Sep 6 00:07:37.087993 kubelet[2636]: E0906 00:07:37.087949 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:37.091345 containerd[1546]: time="2025-09-06T00:07:37.091224122Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:07:37.102429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466259928.mount: Deactivated successfully.
Sep 6 00:07:37.103414 containerd[1546]: time="2025-09-06T00:07:37.103372099Z" level=info msg="CreateContainer within sandbox \"adbe49a8abd666eeb6bcb9845ce4da964ad1f51995fd1d6d6f271c31e20284db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05474eb4e97cebf21e75c5a34b2ea01e7674b79115d23f6cfe1cdb13c5dae613\""
Sep 6 00:07:37.104729 containerd[1546]: time="2025-09-06T00:07:37.104700306Z" level=info msg="StartContainer for \"05474eb4e97cebf21e75c5a34b2ea01e7674b79115d23f6cfe1cdb13c5dae613\""
Sep 6 00:07:37.160073 containerd[1546]: time="2025-09-06T00:07:37.159852496Z" level=info msg="StartContainer for \"05474eb4e97cebf21e75c5a34b2ea01e7674b79115d23f6cfe1cdb13c5dae613\" returns successfully"
Sep 6 00:07:37.428668 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 6 00:07:37.802371 systemd[1]: run-containerd-runc-k8s.io-05474eb4e97cebf21e75c5a34b2ea01e7674b79115d23f6cfe1cdb13c5dae613-runc.0p4UFG.mount: Deactivated successfully.
Sep 6 00:07:38.093075 kubelet[2636]: E0906 00:07:38.092994 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:38.110291 kubelet[2636]: I0906 00:07:38.110233 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vg9nv" podStartSLOduration=5.110218218 podStartE2EDuration="5.110218218s" podCreationTimestamp="2025-09-06 00:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:07:38.106368734 +0000 UTC m=+79.353810079" watchObservedRunningTime="2025-09-06 00:07:38.110218218 +0000 UTC m=+79.357659563"
Sep 6 00:07:38.835193 kubelet[2636]: E0906 00:07:38.834997 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:39.905791 kubelet[2636]: E0906 00:07:39.905691 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:40.256210 systemd-networkd[1230]: lxc_health: Link UP
Sep 6 00:07:40.261692 systemd-networkd[1230]: lxc_health: Gained carrier
Sep 6 00:07:41.833476 kubelet[2636]: E0906 00:07:41.833426 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:41.907910 kubelet[2636]: E0906 00:07:41.906751 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:42.100638 kubelet[2636]: E0906 00:07:42.100293 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:42.298366 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Sep 6 00:07:43.104308 kubelet[2636]: E0906 00:07:43.104270 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:44.443570 systemd[1]: run-containerd-runc-k8s.io-05474eb4e97cebf21e75c5a34b2ea01e7674b79115d23f6cfe1cdb13c5dae613-runc.LnHI7J.mount: Deactivated successfully.
Sep 6 00:07:45.835939 kubelet[2636]: E0906 00:07:45.834139 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:07:46.630940 sshd[4449]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:46.634099 systemd[1]: sshd@24-10.0.0.96:22-10.0.0.1:39724.service: Deactivated successfully.
Sep 6 00:07:46.635934 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:07:46.636019 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:07:46.637044 systemd-logind[1525]: Removed session 25.