Sep 12 17:09:34.896563 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:09:34.896586 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025 Sep 12 17:09:34.896596 kernel: KASLR enabled Sep 12 17:09:34.896602 kernel: efi: EFI v2.7 by EDK II Sep 12 17:09:34.896608 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 12 17:09:34.896614 kernel: random: crng init done Sep 12 17:09:34.896621 kernel: ACPI: Early table checksum verification disabled Sep 12 17:09:34.896627 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 12 17:09:34.896634 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:09:34.896641 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896648 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896653 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896659 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896666 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896673 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896681 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896688 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896694 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:09:34.896701 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 12 17:09:34.896707 kernel: NUMA: Failed to initialise from firmware Sep 12 17:09:34.896764 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:09:34.896770 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 12 17:09:34.896777 kernel: Zone ranges: Sep 12 17:09:34.896783 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:09:34.896789 kernel: DMA32 empty Sep 12 17:09:34.896799 kernel: Normal empty Sep 12 17:09:34.896805 kernel: Movable zone start for each node Sep 12 17:09:34.896811 kernel: Early memory node ranges Sep 12 17:09:34.896818 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 12 17:09:34.896824 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 12 17:09:34.896831 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 12 17:09:34.896838 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 12 17:09:34.896844 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 12 17:09:34.896851 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 12 17:09:34.896857 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 12 17:09:34.896864 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:09:34.896870 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 12 17:09:34.896878 kernel: psci: probing for conduit method from ACPI. Sep 12 17:09:34.896885 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 12 17:09:34.896891 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:09:34.896907 kernel: psci: Trusted OS migration not required Sep 12 17:09:34.896915 kernel: psci: SMC Calling Convention v1.1 Sep 12 17:09:34.896922 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 17:09:34.896931 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:09:34.896938 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:09:34.896945 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 12 17:09:34.896952 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:09:34.896959 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:09:34.896966 kernel: CPU features: detected: Hardware dirty bit management Sep 12 17:09:34.896973 kernel: CPU features: detected: Spectre-v4 Sep 12 17:09:34.896979 kernel: CPU features: detected: Spectre-BHB Sep 12 17:09:34.896986 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:09:34.896993 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:09:34.897001 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:09:34.897008 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:09:34.897015 kernel: alternatives: applying boot alternatives Sep 12 17:09:34.897023 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:09:34.897030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:09:34.897037 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:09:34.897044 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:09:34.897050 kernel: Fallback order for Node 0: 0 Sep 12 17:09:34.897057 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 12 17:09:34.897064 kernel: Policy zone: DMA Sep 12 17:09:34.897070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:09:34.897078 kernel: software IO TLB: area num 4. Sep 12 17:09:34.897085 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 12 17:09:34.897093 kernel: Memory: 2386340K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185948K reserved, 0K cma-reserved) Sep 12 17:09:34.897100 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:09:34.897107 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:09:34.897114 kernel: rcu: RCU event tracing is enabled. Sep 12 17:09:34.897121 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 17:09:34.897128 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:09:34.897135 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:09:34.897142 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 17:09:34.897149 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:09:34.897157 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:09:34.897164 kernel: GICv3: 256 SPIs implemented Sep 12 17:09:34.897171 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:09:34.897178 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:09:34.897185 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:09:34.897192 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 17:09:34.897199 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 17:09:34.897206 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 17:09:34.897213 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 12 17:09:34.897220 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 12 17:09:34.897227 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 12 17:09:34.897234 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:09:34.897242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:09:34.897249 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:09:34.897256 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:09:34.897264 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:09:34.897270 kernel: arm-pv: using stolen time PV Sep 12 17:09:34.897277 kernel: Console: colour dummy device 80x25 Sep 12 17:09:34.897284 kernel: ACPI: Core revision 20230628 Sep 12 17:09:34.897292 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:09:34.897299 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:09:34.897306 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:09:34.897315 kernel: landlock: Up and running. Sep 12 17:09:34.897322 kernel: SELinux: Initializing. Sep 12 17:09:34.897329 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:09:34.897336 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:09:34.897343 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:09:34.897350 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:09:34.897357 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:09:34.897364 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:09:34.897371 kernel: Platform MSI: ITS@0x8080000 domain created Sep 12 17:09:34.897379 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 12 17:09:34.897386 kernel: Remapping and enabling EFI services. Sep 12 17:09:34.897393 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 17:09:34.897400 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:09:34.897407 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 17:09:34.897414 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 12 17:09:34.897421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:09:34.897428 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:09:34.897435 kernel: Detected PIPT I-cache on CPU2 Sep 12 17:09:34.897442 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 12 17:09:34.897450 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 12 17:09:34.897457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:09:34.897470 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 12 17:09:34.897478 kernel: Detected PIPT I-cache on CPU3 Sep 12 17:09:34.897485 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 12 17:09:34.897493 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 12 17:09:34.897500 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:09:34.897507 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 12 17:09:34.897515 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:09:34.897528 kernel: SMP: Total of 4 processors activated. Sep 12 17:09:34.897537 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:09:34.897544 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:09:34.897552 kernel: CPU features: detected: Common not Private translations Sep 12 17:09:34.897559 kernel: CPU features: detected: CRC32 instructions Sep 12 17:09:34.897567 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 17:09:34.897574 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:09:34.897581 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:09:34.897590 kernel: CPU features: detected: Privileged Access Never Sep 12 17:09:34.897597 kernel: CPU features: detected: RAS Extension Support Sep 12 17:09:34.897605 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:09:34.897613 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:09:34.897620 kernel: alternatives: applying system-wide alternatives Sep 12 17:09:34.897627 kernel: devtmpfs: initialized Sep 12 17:09:34.897634 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:09:34.897642 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:09:34.897649 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:09:34.897657 kernel: SMBIOS 3.0.0 present. 
Sep 12 17:09:34.897665 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 12 17:09:34.897672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:09:34.897680 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:09:34.897687 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:09:34.897694 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:09:34.897702 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:09:34.897709 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 12 17:09:34.897718 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:09:34.897725 kernel: cpuidle: using governor menu Sep 12 17:09:34.897733 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:09:34.897740 kernel: ASID allocator initialised with 32768 entries Sep 12 17:09:34.897747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:09:34.897754 kernel: Serial: AMBA PL011 UART driver Sep 12 17:09:34.897762 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:09:34.897769 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 17:09:34.897776 kernel: Modules: 508992 pages in range for PLT usage Sep 12 17:09:34.897784 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:09:34.897792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:09:34.897800 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:09:34.897807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:09:34.897814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:09:34.897821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:09:34.897829 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:09:34.897836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:09:34.897843 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:09:34.897850 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:09:34.897859 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:09:34.897866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:09:34.897874 kernel: ACPI: Interpreter enabled Sep 12 17:09:34.897881 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:09:34.897888 kernel: ACPI: MCFG table detected, 1 entries Sep 12 17:09:34.897895 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:09:34.897930 kernel: printk: console [ttyAMA0] enabled Sep 12 17:09:34.897938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:09:34.898095 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:09:34.898178 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 17:09:34.898246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 17:09:34.898312 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 17:09:34.898379 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 17:09:34.898388 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 17:09:34.898396 kernel: PCI host bridge to bus 0000:00 Sep 12 
17:09:34.898468 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 17:09:34.898542 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 17:09:34.898608 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 17:09:34.898670 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:09:34.898762 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 12 17:09:34.898842 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 17:09:34.898929 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 12 17:09:34.899009 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 12 17:09:34.899081 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:09:34.899149 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:09:34.899218 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 12 17:09:34.899288 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 12 17:09:34.899356 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 17:09:34.899420 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 17:09:34.899492 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 17:09:34.899503 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 17:09:34.899510 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 17:09:34.899518 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 17:09:34.899533 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 17:09:34.899541 kernel: iommu: Default domain type: Translated Sep 12 17:09:34.899548 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:09:34.899556 kernel: efivars: Registered efivars operations Sep 12 17:09:34.899566 kernel: vgaarb: loaded Sep 12 17:09:34.899574 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:09:34.899581 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:09:34.899589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:09:34.899597 kernel: pnp: PnP ACPI init Sep 12 17:09:34.899688 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 17:09:34.899700 kernel: pnp: PnP ACPI: found 1 devices Sep 12 17:09:34.899707 kernel: NET: Registered PF_INET protocol family Sep 12 17:09:34.899715 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:09:34.899726 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:09:34.899734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:09:34.899742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:09:34.899753 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:09:34.899762 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:09:34.899774 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:09:34.899781 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:09:34.899789 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:09:34.899798 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:09:34.899805 kernel: kvm [1]: HYP mode not available Sep 12 17:09:34.899813 kernel: Initialise 
system trusted keyrings Sep 12 17:09:34.899820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:09:34.899827 kernel: Key type asymmetric registered Sep 12 17:09:34.899834 kernel: Asymmetric key parser 'x509' registered Sep 12 17:09:34.899842 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:09:34.899849 kernel: io scheduler mq-deadline registered Sep 12 17:09:34.899856 kernel: io scheduler kyber registered Sep 12 17:09:34.899864 kernel: io scheduler bfq registered Sep 12 17:09:34.899873 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 17:09:34.899881 kernel: ACPI: button: Power Button [PWRB] Sep 12 17:09:34.899888 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 17:09:34.899992 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 12 17:09:34.900004 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:09:34.900012 kernel: thunder_xcv, ver 1.0 Sep 12 17:09:34.900019 kernel: thunder_bgx, ver 1.0 Sep 12 17:09:34.900026 kernel: nicpf, ver 1.0 Sep 12 17:09:34.900034 kernel: nicvf, ver 1.0 Sep 12 17:09:34.900117 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:09:34.900183 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:09:34 UTC (1757696974) Sep 12 17:09:34.900193 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:09:34.900200 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 12 17:09:34.900209 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:09:34.900216 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:09:34.900224 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:09:34.900231 kernel: Segment Routing with IPv6 Sep 12 17:09:34.900241 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:09:34.900249 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:09:34.900256 kernel: Key type dns_resolver registered Sep 12 17:09:34.900264 kernel: registered taskstats version 1 Sep 12 17:09:34.900272 kernel: Loading compiled-in X.509 certificates Sep 12 17:09:34.900279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02' Sep 12 17:09:34.900286 kernel: Key type .fscrypt registered Sep 12 17:09:34.900294 kernel: Key type fscrypt-provisioning registered Sep 12 17:09:34.900301 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:09:34.900310 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:09:34.900317 kernel: ima: No architecture policies found Sep 12 17:09:34.900325 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:09:34.900332 kernel: clk: Disabling unused clocks Sep 12 17:09:34.900340 kernel: Freeing unused kernel memory: 39488K Sep 12 17:09:34.900347 kernel: Run /init as init process Sep 12 17:09:34.900354 kernel: with arguments: Sep 12 17:09:34.900362 kernel: /init Sep 12 17:09:34.900369 kernel: with environment: Sep 12 17:09:34.900378 kernel: HOME=/ Sep 12 17:09:34.900385 kernel: TERM=linux Sep 12 17:09:34.900392 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:09:34.900402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:34.900411 systemd[1]: Detected virtualization kvm. Sep 12 17:09:34.900420 systemd[1]: Detected architecture arm64. Sep 12 17:09:34.900427 systemd[1]: Running in initrd. Sep 12 17:09:34.900436 systemd[1]: No hostname configured, using default hostname. Sep 12 17:09:34.900444 systemd[1]: Hostname set to . Sep 12 17:09:34.900453 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:34.900461 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:09:34.900469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:34.900476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:34.900485 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:09:34.900493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:34.900503 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:09:34.900511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:09:34.900520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:09:34.900537 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:09:34.900546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:34.900554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:34.900562 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:09:34.900572 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:34.900579 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:34.900587 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:09:34.900595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:34.900603 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:34.900611 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:09:34.900619 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:09:34.900627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 12 17:09:34.900635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:34.900644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:34.900653 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:09:34.900661 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:09:34.900669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:34.900676 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:09:34.900684 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:09:34.900692 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:34.900701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:34.900710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:34.900718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:09:34.900726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:34.900734 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:09:34.900742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:34.900772 systemd-journald[238]: Collecting audit messages is disabled. Sep 12 17:09:34.900792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:34.900800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:34.900808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:09:34.900820 systemd-journald[238]: Journal started Sep 12 17:09:34.900838 systemd-journald[238]: Runtime Journal (/run/log/journal/99c8d9ea18554e08b0af16014ef6947e) is 5.9M, max 47.3M, 41.4M free. Sep 12 17:09:34.886997 systemd-modules-load[239]: Inserted module 'overlay' Sep 12 17:09:34.906705 kernel: Bridge firewalling registered Sep 12 17:09:34.906729 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:34.904838 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 12 17:09:34.907881 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:34.917896 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:34.920040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:34.925102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:34.928217 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:09:34.937772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:34.939443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:34.943043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:34.945004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:34.956125 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:09:34.958495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:09:34.968015 dracut-cmdline[274]: dracut-dracut-053 Sep 12 17:09:34.970674 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:09:34.985995 systemd-resolved[278]: Positive Trust Anchors: Sep 12 17:09:34.986015 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:09:34.986047 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:09:34.991164 systemd-resolved[278]: Defaulting to hostname 'linux'. Sep 12 17:09:34.992180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:09:34.996510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:35.038938 kernel: SCSI subsystem initialized Sep 12 17:09:35.043930 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:09:35.051939 kernel: iscsi: registered transport (tcp) Sep 12 17:09:35.065072 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:09:35.065148 kernel: QLogic iSCSI HBA Driver Sep 12 17:09:35.113118 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:35.122129 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:09:35.138974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:09:35.139046 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:09:35.139058 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:09:35.184951 kernel: raid6: neonx8 gen() 15329 MB/s Sep 12 17:09:35.201929 kernel: raid6: neonx4 gen() 15558 MB/s Sep 12 17:09:35.218931 kernel: raid6: neonx2 gen() 12966 MB/s Sep 12 17:09:35.235943 kernel: raid6: neonx1 gen() 10220 MB/s Sep 12 17:09:35.252955 kernel: raid6: int64x8 gen() 6887 MB/s Sep 12 17:09:35.269935 kernel: raid6: int64x4 gen() 7335 MB/s Sep 12 17:09:35.286939 kernel: raid6: int64x2 gen() 6123 MB/s Sep 12 17:09:35.303995 kernel: raid6: int64x1 gen() 5050 MB/s Sep 12 17:09:35.304126 kernel: raid6: using algorithm neonx4 gen() 15558 MB/s Sep 12 17:09:35.322576 kernel: raid6: .... xor() 11932 MB/s, rmw enabled Sep 12 17:09:35.322647 kernel: raid6: using neon recovery algorithm Sep 12 17:09:35.327232 kernel: xor: measuring software checksum speed Sep 12 17:09:35.327292 kernel: 8regs : 19783 MB/sec Sep 12 17:09:35.327303 kernel: 32regs : 19650 MB/sec Sep 12 17:09:35.328315 kernel: arm64_neon : 27096 MB/sec Sep 12 17:09:35.328345 kernel: xor: using function: arm64_neon (27096 MB/sec) Sep 12 17:09:35.377950 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:09:35.389886 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:09:35.403130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:35.414945 systemd-udevd[460]: Using default interface naming scheme 'v255'. Sep 12 17:09:35.418138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:35.424161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:09:35.437226 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 12 17:09:35.468005 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:35.477170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:35.520835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:35.532133 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:09:35.547075 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:35.549163 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:35.550891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:35.552047 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:35.562088 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:09:35.572492 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:35.580159 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 17:09:35.580350 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:09:35.583033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:09:35.583064 kernel: GPT:9289727 != 19775487 Sep 12 17:09:35.583075 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:09:35.584378 kernel: GPT:9289727 != 19775487 Sep 12 17:09:35.584398 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:09:35.589400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:35.594394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:09:35.589478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:35.596885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:35.600852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:35.600953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:35.602977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:35.614129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:35.620341 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (505) Sep 12 17:09:35.620388 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (522) Sep 12 17:09:35.625998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:09:35.630367 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:09:35.631613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:35.642603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 12 17:09:35.646365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:09:35.647363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:09:35.662113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:09:35.664315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:35.670002 disk-uuid[550]: Primary Header is updated. Sep 12 17:09:35.670002 disk-uuid[550]: Secondary Entries is updated. Sep 12 17:09:35.670002 disk-uuid[550]: Secondary Header is updated. Sep 12 17:09:35.674292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:09:35.678924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:09:35.684203 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:36.692216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:09:36.692278 disk-uuid[551]: The operation has completed successfully. Sep 12 17:09:36.727765 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:09:36.727886 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:09:36.746118 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:09:36.749310 sh[576]: Success Sep 12 17:09:36.759921 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 17:09:36.798366 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:09:36.800220 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:09:36.802391 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:09:36.815933 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129 Sep 12 17:09:36.815979 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:36.815990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:09:36.817361 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:09:36.817376 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:09:36.823895 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:09:36.826044 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:09:36.832064 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:09:36.833493 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:09:36.842330 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:36.842390 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:36.842402 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:09:36.845929 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:09:36.856023 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:09:36.857461 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:36.866159 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 12 17:09:36.875120 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:09:36.939967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:36.945046 ignition[671]: Ignition 2.19.0 Sep 12 17:09:36.945058 ignition[671]: Stage: fetch-offline Sep 12 17:09:36.951121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:09:36.945109 ignition[671]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:36.945119 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:36.945280 ignition[671]: parsed url from cmdline: "" Sep 12 17:09:36.945283 ignition[671]: no config URL provided Sep 12 17:09:36.945288 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:09:36.945295 ignition[671]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:09:36.945318 ignition[671]: op(1): [started] loading QEMU firmware config module Sep 12 17:09:36.945323 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:09:36.950645 ignition[671]: op(1): [finished] loading QEMU firmware config module Sep 12 17:09:36.975727 systemd-networkd[768]: lo: Link UP Sep 12 17:09:36.975740 systemd-networkd[768]: lo: Gained carrier Sep 12 17:09:36.976478 systemd-networkd[768]: Enumeration completed Sep 12 17:09:36.976583 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:09:36.978145 systemd[1]: Reached target network.target - Network. Sep 12 17:09:36.980237 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:36.980240 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:09:36.981268 systemd-networkd[768]: eth0: Link UP Sep 12 17:09:36.981272 systemd-networkd[768]: eth0: Gained carrier Sep 12 17:09:36.981279 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:37.009974 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:09:37.010115 ignition[671]: parsing config with SHA512: c28ad972dbdd79be6e331535da300db1405b1e68bec4f0bd5a06c0449384f24bd678d3490d04288020159ca5d38dcdc2f64a9319f01b1757bb6274f3d1aaddbc Sep 12 17:09:37.015778 unknown[671]: fetched base config from "system" Sep 12 17:09:37.015788 unknown[671]: fetched user config from "qemu" Sep 12 17:09:37.016328 ignition[671]: fetch-offline: fetch-offline passed Sep 12 17:09:37.016402 ignition[671]: Ignition finished successfully Sep 12 17:09:37.019941 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:37.021560 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:09:37.033124 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:09:37.043359 ignition[773]: Ignition 2.19.0 Sep 12 17:09:37.043370 ignition[773]: Stage: kargs Sep 12 17:09:37.043546 ignition[773]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:37.043557 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:37.044433 ignition[773]: kargs: kargs passed Sep 12 17:09:37.048431 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 12 17:09:37.044478 ignition[773]: Ignition finished successfully Sep 12 17:09:37.058063 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:09:37.067820 ignition[780]: Ignition 2.19.0 Sep 12 17:09:37.067831 ignition[780]: Stage: disks Sep 12 17:09:37.068035 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:37.068046 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:37.068986 ignition[780]: disks: disks passed Sep 12 17:09:37.069032 ignition[780]: Ignition finished successfully Sep 12 17:09:37.072954 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:09:37.074212 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:37.076021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:09:37.078253 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:37.080491 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:09:37.083284 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:09:37.095079 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:09:37.107989 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:09:37.112878 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:09:37.124031 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:09:37.173940 kernel: EXT4-fs (vda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none. Sep 12 17:09:37.174216 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:09:37.175959 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:09:37.194056 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:09:37.195988 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:09:37.200589 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:09:37.206037 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798) Sep 12 17:09:37.200636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:09:37.200663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:37.215244 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:37.215270 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:37.215280 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:09:37.203496 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:09:37.207923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:09:37.221932 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:09:37.224278 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:09:37.261879 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:09:37.265707 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:09:37.270108 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:09:37.273931 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:09:37.350585 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:37.366084 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:09:37.369595 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:09:37.377092 kernel: BTRFS info (device vda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:37.399320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:09:37.410732 ignition[914]: INFO : Ignition 2.19.0 Sep 12 17:09:37.410732 ignition[914]: INFO : Stage: mount Sep 12 17:09:37.410732 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:37.410732 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:37.419059 ignition[914]: INFO : mount: mount passed Sep 12 17:09:37.419059 ignition[914]: INFO : Ignition finished successfully Sep 12 17:09:37.413342 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:09:37.428074 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:09:37.814819 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:09:37.827105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:09:37.835935 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (924) Sep 12 17:09:37.838015 kernel: BTRFS info (device vda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:37.838060 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:37.838071 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:09:37.840931 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:09:37.842297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:09:37.867215 ignition[941]: INFO : Ignition 2.19.0 Sep 12 17:09:37.867215 ignition[941]: INFO : Stage: files Sep 12 17:09:37.869467 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:37.869467 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:37.869467 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:09:37.873175 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:09:37.873175 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:09:37.875994 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:09:37.875994 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:09:37.875994 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:09:37.875994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:09:37.875994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 12 17:09:37.874701 unknown[941]: wrote ssh authorized keys file for user: core Sep 12 17:09:37.972551 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:09:38.203273 systemd-networkd[768]: eth0: Gained IPv6LL Sep 12 17:09:38.658976 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:09:38.658976 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:09:38.658976 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:09:38.788622 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:09:38.866972 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:09:38.866972 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:09:38.870466 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 12 17:09:39.214622 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:09:39.552578 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:09:39.552578 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:09:39.555950 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 17:09:39.557605 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:09:39.582683 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:09:39.587269 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:09:39.591277 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:09:39.591277 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:09:39.591277 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:09:39.591277 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:09:39.591277 ignition[941]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:09:39.591277 ignition[941]: INFO : files: files passed Sep 12 17:09:39.591277 ignition[941]: INFO : Ignition finished successfully Sep 12 17:09:39.589604 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:09:39.598131 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:09:39.599841 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:09:39.602272 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:09:39.602394 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:09:39.615311 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:09:39.618395 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:39.618395 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:39.621855 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:39.622990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:39.624986 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:09:39.634137 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:09:39.656952 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:09:39.657067 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:09:39.660959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:09:39.662334 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:09:39.664191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:09:39.665174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:09:39.683391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:39.697177 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:09:39.706248 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:39.707369 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:39.710100 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:09:39.712498 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:09:39.712652 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:39.714827 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:09:39.716775 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:09:39.718346 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:09:39.719778 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:39.721782 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:39.723548 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 12 17:09:39.725260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:39.726930 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:09:39.728682 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:09:39.730417 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:09:39.731680 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:09:39.731817 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:39.733881 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:39.735671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:39.737470 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:09:39.741027 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:39.742064 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:09:39.742201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:39.744495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:09:39.744624 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:39.746438 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:09:39.747661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:09:39.751019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:39.752494 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:09:39.754478 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:09:39.756059 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:09:39.756167 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:39.757761 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:09:39.757851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:39.759419 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:09:39.759603 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:39.761450 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:09:39.761586 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:09:39.772146 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:09:39.774499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:09:39.775502 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:09:39.775669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:39.777648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:09:39.777764 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:39.784422 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:09:39.784565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:09:39.791270 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 17:09:39.794078 ignition[995]: INFO : Ignition 2.19.0 Sep 12 17:09:39.794078 ignition[995]: INFO : Stage: umount Sep 12 17:09:39.797211 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:39.797211 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:09:39.797211 ignition[995]: INFO : umount: umount passed Sep 12 17:09:39.797211 ignition[995]: INFO : Ignition finished successfully Sep 12 17:09:39.797343 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:09:39.797530 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:09:39.800170 systemd[1]: Stopped target network.target - Network. Sep 12 17:09:39.801044 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:09:39.801131 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:09:39.802515 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:09:39.802566 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:09:39.803856 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:09:39.803931 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:09:39.805431 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:09:39.805488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:09:39.808298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:09:39.809352 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:09:39.818978 systemd-networkd[768]: eth0: DHCPv6 lease lost Sep 12 17:09:39.821648 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:09:39.821810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:09:39.824869 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:09:39.825077 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:09:39.828290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:09:39.828372 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:39.840095 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:09:39.841138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:09:39.841223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:39.843496 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:09:39.843567 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:39.845823 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:09:39.845883 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:39.851514 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:09:39.851715 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:39.853349 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:39.866265 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:09:39.866388 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:09:39.870192 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 12 17:09:39.870348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:39.872337 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:09:39.872381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:39.873947 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:09:39.873980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:39.875771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:09:39.875830 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:09:39.878236 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:09:39.878290 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:39.880549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:39.880597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:39.894088 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:09:39.895157 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:09:39.895221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:39.897302 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:09:39.897346 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:39.899262 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:09:39.899306 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:39.901471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:39.901525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:39.903802 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:09:39.904940 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:09:39.906840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:09:39.907013 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:09:39.909635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:09:39.910782 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:09:39.910860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:39.913783 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:09:39.924355 systemd[1]: Switching root. Sep 12 17:09:39.951768 systemd-journald[238]: Journal stopped Sep 12 17:09:40.838916 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Sep 12 17:09:40.838978 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:09:40.839001 kernel: SELinux: policy capability open_perms=1 Sep 12 17:09:40.839014 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:09:40.839023 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:09:40.839033 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:09:40.839043 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:09:40.839052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:09:40.839062 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:09:40.839071 kernel: audit: type=1403 audit(1757696980.224:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:09:40.839085 systemd[1]: Successfully loaded SELinux policy in 32.346ms. Sep 12 17:09:40.839104 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.925ms. Sep 12 17:09:40.839118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:40.839129 systemd[1]: Detected virtualization kvm. Sep 12 17:09:40.839140 systemd[1]: Detected architecture arm64. Sep 12 17:09:40.839153 systemd[1]: Detected first boot. Sep 12 17:09:40.839163 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:40.839174 zram_generator::config[1041]: No configuration found. Sep 12 17:09:40.839186 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:09:40.839196 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:09:40.839208 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:09:40.839219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:09:40.839230 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:09:40.839242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:09:40.839252 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:09:40.839263 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:09:40.839274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:09:40.839285 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:09:40.839296 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:09:40.839309 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:09:40.839320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:40.839332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:40.839343 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:09:40.839353 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:09:40.839364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Sep 12 17:09:40.839375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:40.839386 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:09:40.839399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:40.839410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:09:40.839420 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:09:40.839431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:09:40.839443 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:09:40.839454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:40.839465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:40.839476 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:40.839487 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:40.839504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:09:40.839518 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:09:40.839529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:40.839541 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:40.839551 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:40.839562 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:09:40.839573 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:09:40.839585 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:09:40.839595 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:09:40.839609 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:09:40.839621 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:09:40.839632 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:09:40.839643 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:09:40.839654 systemd[1]: Reached target machines.target - Containers. Sep 12 17:09:40.839665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:09:40.839676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:40.839687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:40.839699 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:09:40.839709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:40.839720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:40.839731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:40.839741 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:09:40.839752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 17:09:40.839763 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:09:40.839774 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:09:40.839786 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:09:40.839796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:09:40.839807 kernel: fuse: init (API version 7.39) Sep 12 17:09:40.839817 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:09:40.839827 kernel: loop: module loaded Sep 12 17:09:40.839837 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:40.839849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:40.839860 kernel: ACPI: bus type drm_connector registered Sep 12 17:09:40.839870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:09:40.839882 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:09:40.839917 systemd-journald[1108]: Collecting audit messages is disabled. Sep 12 17:09:40.839940 systemd-journald[1108]: Journal started Sep 12 17:09:40.839962 systemd-journald[1108]: Runtime Journal (/run/log/journal/99c8d9ea18554e08b0af16014ef6947e) is 5.9M, max 47.3M, 41.4M free. Sep 12 17:09:40.622454 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:09:40.641204 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:09:40.641665 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:09:40.844468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:40.846932 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:09:40.846998 systemd[1]: Stopped verity-setup.service. Sep 12 17:09:40.849946 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:40.851677 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:09:40.852998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:09:40.854205 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:09:40.855428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:09:40.856623 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:09:40.858148 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:09:40.859947 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:40.861327 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:09:40.861474 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:09:40.863031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:40.863183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:40.864476 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:40.864637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:40.865967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:40.866122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:40.867435 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 12 17:09:40.867591 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:09:40.869157 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:40.869294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:40.870557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:40.871856 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:09:40.873567 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:09:40.885771 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:09:40.898592 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:09:40.900962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:09:40.902235 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:09:40.902282 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:40.904681 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:09:40.907425 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:09:40.911951 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:09:40.913214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:40.915749 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:09:40.918559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:09:40.919970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:40.924187 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:09:40.927844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:40.929115 systemd-journald[1108]: Time spent on flushing to /var/log/journal/99c8d9ea18554e08b0af16014ef6947e is 27.839ms for 856 entries. Sep 12 17:09:40.929115 systemd-journald[1108]: System Journal (/var/log/journal/99c8d9ea18554e08b0af16014ef6947e) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:09:40.983024 systemd-journald[1108]: Received client request to flush runtime journal. Sep 12 17:09:40.983084 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 17:09:40.929433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:40.936098 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:09:40.939627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:40.944467 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:09:40.946161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:40.947703 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:09:40.949084 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 12 17:09:40.950646 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:09:40.952223 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:09:40.960284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:09:40.968298 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:09:40.973151 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:09:40.975585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:40.989487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:09:40.990959 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:09:40.991642 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Sep 12 17:09:40.991750 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:09:40.992979 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. Sep 12 17:09:40.993784 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:09:40.999965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:41.007332 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:09:41.008575 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:09:41.024982 kernel: loop1: detected capacity change from 0 to 211168 Sep 12 17:09:41.033482 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:09:41.046189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:41.053949 kernel: loop2: detected capacity change from 0 to 114328 Sep 12 17:09:41.064735 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 12 17:09:41.064762 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 12 17:09:41.070987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:41.094964 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 17:09:41.104928 kernel: loop4: detected capacity change from 0 to 211168 Sep 12 17:09:41.116940 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 17:09:41.121388 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:09:41.121838 (sd-merge)[1179]: Merged extensions into '/usr'. Sep 12 17:09:41.126385 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:09:41.126404 systemd[1]: Reloading... Sep 12 17:09:41.191934 zram_generator::config[1201]: No configuration found. Sep 12 17:09:41.279535 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:09:41.299323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:09:41.336560 systemd[1]: Reloading finished in 209 ms. Sep 12 17:09:41.370956 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Sep 12 17:09:41.372192 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:09:41.393184 systemd[1]: Starting ensure-sysext.service... Sep 12 17:09:41.395105 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:09:41.407316 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:09:41.407339 systemd[1]: Reloading... Sep 12 17:09:41.413591 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:09:41.413871 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:09:41.414552 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:09:41.414788 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 12 17:09:41.414837 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 12 17:09:41.417172 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:41.417178 systemd-tmpfiles[1240]: Skipping /boot Sep 12 17:09:41.424559 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:41.424573 systemd-tmpfiles[1240]: Skipping /boot Sep 12 17:09:41.460942 zram_generator::config[1270]: No configuration found. Sep 12 17:09:41.546835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:09:41.583580 systemd[1]: Reloading finished in 175 ms. Sep 12 17:09:41.599842 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:09:41.615741 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:41.624033 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:09:41.627094 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:09:41.629797 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:09:41.633268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:09:41.638245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:41.644276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:09:41.648258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:41.653816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:41.657308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:41.660493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:41.661619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:41.672403 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:09:41.674611 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:09:41.678436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 12 17:09:41.678544 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Sep 12 17:09:41.678614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:41.683307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:41.683473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:41.685493 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:41.685687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:41.694487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:41.703352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:41.708360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:41.711441 augenrules[1333]: No rules Sep 12 17:09:41.712338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:41.713988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:41.717150 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:09:41.718699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:41.721578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:09:41.725505 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:09:41.727130 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:09:41.730070 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:09:41.731546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:41.731687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:41.734557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:41.734712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:41.749924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1363) Sep 12 17:09:41.756170 systemd[1]: Finished ensure-sysext.service. Sep 12 17:09:41.777888 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:41.779993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:41.783433 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:09:41.792153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:09:41.796735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:41.806182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:41.809683 systemd-resolved[1307]: Positive Trust Anchors: Sep 12 17:09:41.809703 systemd-resolved[1307]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:09:41.809736 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:09:41.811188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:41.814029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:41.818212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:41.822124 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:09:41.826452 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:09:41.830807 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:09:41.833644 systemd-resolved[1307]: Defaulting to hostname 'linux'. Sep 12 17:09:41.833997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:09:41.834622 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:09:41.835949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:41.836126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:41.837336 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:41.837470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:41.839075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:41.839215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:41.845265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:09:41.848784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:41.850196 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:41.850288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:41.856455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:09:41.894678 systemd-networkd[1382]: lo: Link UP Sep 12 17:09:41.894864 systemd-networkd[1382]: lo: Gained carrier Sep 12 17:09:41.896192 systemd-networkd[1382]: Enumeration completed Sep 12 17:09:41.900199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:41.901273 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:09:41.901350 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:09:41.901353 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:09:41.902839 systemd-networkd[1382]: eth0: Link UP Sep 12 17:09:41.902859 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:09:41.903006 systemd-networkd[1382]: eth0: Gained carrier Sep 12 17:09:41.903024 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:41.904304 systemd[1]: Reached target network.target - Network. Sep 12 17:09:41.905375 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:09:41.907651 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:09:41.918420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:09:41.922018 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:09:41.924015 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:09:41.924891 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Sep 12 17:09:41.926063 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:09:41.926121 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-09-12 17:09:41.558821 UTC. Sep 12 17:09:41.938460 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:09:41.949011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:41.971687 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:09:41.973550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:41.974875 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:09:41.976232 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:09:41.977714 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:09:41.979541 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:09:41.980858 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:09:41.982389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:09:41.983767 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:09:41.983807 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:09:41.984865 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:09:41.986991 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:09:41.990020 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:09:42.006194 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:09:42.009035 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:09:42.010998 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:09:42.012353 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:09:42.013415 systemd[1]: Reached target basic.target - Basic System. 
Sep 12 17:09:42.014463 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:09:42.014499 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:09:42.015741 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:09:42.017542 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:09:42.018307 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:09:42.022092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:09:42.025147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:09:42.026306 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:09:42.029201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:09:42.030337 jq[1409]: false Sep 12 17:09:42.033822 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:09:42.036135 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:09:42.040529 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:09:42.051428 dbus-daemon[1408]: [system] SELinux support is enabled Sep 12 17:09:42.056429 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:09:42.056593 extend-filesystems[1410]: Found loop3 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found loop4 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found loop5 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda1 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda2 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda3 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found usr Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda4 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda6 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda7 Sep 12 17:09:42.056593 extend-filesystems[1410]: Found vda9 Sep 12 17:09:42.056593 extend-filesystems[1410]: Checking size of /dev/vda9 Sep 12 17:09:42.058510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:09:42.061167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:09:42.061887 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:09:42.067761 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:09:42.069524 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:09:42.073937 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:09:42.077195 jq[1428]: true Sep 12 17:09:42.078282 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:09:42.078374 extend-filesystems[1410]: Resized partition /dev/vda9 Sep 12 17:09:42.078480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:09:42.078773 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 12 17:09:42.078944 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:09:42.085980 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:09:42.082301 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:09:42.082458 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:09:42.100930 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1344) Sep 12 17:09:42.100992 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:09:42.111749 update_engine[1427]: I20250912 17:09:42.111466 1427 main.cc:92] Flatcar Update Engine starting Sep 12 17:09:42.113079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:09:42.113128 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:09:42.116931 jq[1434]: true Sep 12 17:09:42.114621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:09:42.117304 update_engine[1427]: I20250912 17:09:42.117123 1427 update_check_scheduler.cc:74] Next update check in 8m33s Sep 12 17:09:42.114642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:09:42.129777 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:09:42.130725 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:09:42.132485 tar[1433]: linux-arm64/LICENSE Sep 12 17:09:42.132680 tar[1433]: linux-arm64/helm Sep 12 17:09:42.138924 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:09:42.148192 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:09:42.160335 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:09:42.160582 systemd-logind[1418]: New seat seat0. Sep 12 17:09:42.161178 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:09:42.163267 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:09:42.163267 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:09:42.163267 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:09:42.182725 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Sep 12 17:09:42.167378 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:09:42.167586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:09:42.186721 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:09:42.188624 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:09:42.190602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 12 17:09:42.222237 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:09:42.310384 containerd[1439]: time="2025-09-12T17:09:42.310234582Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:09:42.343655 containerd[1439]: time="2025-09-12T17:09:42.343604273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346075329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346120134Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346151009Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346317751Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346334926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346388356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346401218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346580630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346596927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346610666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347072 containerd[1439]: time="2025-09-12T17:09:42.346620322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347358 containerd[1439]: time="2025-09-12T17:09:42.346723939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347358 containerd[1439]: time="2025-09-12T17:09:42.346955142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347620 containerd[1439]: time="2025-09-12T17:09:42.347589593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:09:42.347693 containerd[1439]: time="2025-09-12T17:09:42.347679509Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:09:42.347971 containerd[1439]: time="2025-09-12T17:09:42.347936129Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:09:42.348165 containerd[1439]: time="2025-09-12T17:09:42.348146761Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:09:42.358951 containerd[1439]: time="2025-09-12T17:09:42.358893241Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:09:42.359116 containerd[1439]: time="2025-09-12T17:09:42.359101048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:09:42.359347 containerd[1439]: time="2025-09-12T17:09:42.359327595Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:09:42.359469 containerd[1439]: time="2025-09-12T17:09:42.359452317Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:09:42.359554 containerd[1439]: time="2025-09-12T17:09:42.359528494Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:09:42.359767 containerd[1439]: time="2025-09-12T17:09:42.359748438Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:09:42.360307 containerd[1439]: time="2025-09-12T17:09:42.360285340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:09:42.360548 containerd[1439]: time="2025-09-12T17:09:42.360520397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:09:42.360703 containerd[1439]: time="2025-09-12T17:09:42.360683552Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:09:42.360765 containerd[1439]: time="2025-09-12T17:09:42.360752515Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:09:42.360821 containerd[1439]: time="2025-09-12T17:09:42.360808885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.360955 containerd[1439]: time="2025-09-12T17:09:42.360938225Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361007456Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361028599Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361046499Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361059742Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361078214Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361092526Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361122141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361138552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361151910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361164810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361190571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361208737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361222515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361622 containerd[1439]: time="2025-09-12T17:09:42.361237246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361253314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361269648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361282624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361296020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361309034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361330826Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361354145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361367274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 12 17:09:42.361912 containerd[1439]: time="2025-09-12T17:09:42.361378418Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:09:42.362248 containerd[1439]: time="2025-09-12T17:09:42.362226440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:09:42.362704 containerd[1439]: time="2025-09-12T17:09:42.362680295Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:09:42.362784 containerd[1439]: time="2025-09-12T17:09:42.362770326Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:09:42.362899 containerd[1439]: time="2025-09-12T17:09:42.362824825Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:09:42.364966 containerd[1439]: time="2025-09-12T17:09:42.362959776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.364966 containerd[1439]: time="2025-09-12T17:09:42.362990270Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:09:42.364966 containerd[1439]: time="2025-09-12T17:09:42.363002444Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:09:42.364966 containerd[1439]: time="2025-09-12T17:09:42.363013398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.363516181Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.363578427Z" level=info msg="Connect containerd service" Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.363625141Z" level=info msg="using legacy CRI server" Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.363632431Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.363729789Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:09:42.365091 containerd[1439]: time="2025-09-12T17:09:42.364645629Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365379271Z" level=info msg="Start subscribing containerd event" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365450907Z" level=info msg="Start recovering state" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365520061Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365570515Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365522580Z" level=info msg="Start event monitor" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365603222Z" level=info msg="Start snapshots syncer" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365612840Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365620282Z" level=info msg="Start streaming server" Sep 12 17:09:42.366964 containerd[1439]: time="2025-09-12T17:09:42.365881405Z" level=info msg="containerd successfully booted in 0.057272s" Sep 12 17:09:42.366004 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:09:42.474557 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:09:42.495151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:09:42.503192 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:09:42.510173 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:09:42.510383 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:09:42.524002 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
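A note on the "failed to load cni during init" error above: the CRI plugin is reporting that NetworkPluginConfDir (/etc/cni/net.d in the config dump) contains no network configuration yet, so pod networking stays uninitialized until a CNI config file appears there. Purely as an illustration of what such a file looks like (on a real cluster the network add-on installs it; the file name, network name, and subnet below are assumptions, not values from this log), a minimal bridge conflist could be dropped in with a sketch like this:

    // cni_stub.go: illustrative only -- writes a minimal CNI conflist so the
    // containerd CRI plugin can initialize pod networking. The file name,
    // network name and subnet are assumptions, not taken from the log.
    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    const confDir = "/etc/cni/net.d" // matches NetworkPluginConfDir in the config dump above

    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll(confDir, 0o755); err != nil {
            log.Fatal(err)
        }
        // NetworkPluginMaxConfNum:1 in the dump means only one config file is
        // loaded, so a single conflist is enough for this sketch.
        path := filepath.Join(confDir, "10-containerd-net.conflist")
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote %s", path)
    }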
Sep 12 17:09:42.532988 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:09:42.536490 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:09:42.539940 tar[1433]: linux-arm64/README.md Sep 12 17:09:42.541069 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:09:42.542832 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:09:42.553968 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:09:43.131086 systemd-networkd[1382]: eth0: Gained IPv6LL Sep 12 17:09:43.137700 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:09:43.139339 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:09:43.157263 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:09:43.161174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:09:43.163303 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:09:43.178641 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:09:43.178854 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:09:43.181800 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:09:43.185245 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:09:43.725278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:09:43.726654 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:09:43.731724 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:09:43.733992 systemd[1]: Startup finished in 589ms (kernel) + 5.526s (initrd) + 3.541s (userspace) = 9.658s. Sep 12 17:09:44.080593 kubelet[1520]: E0912 17:09:44.080477 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:09:44.083859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:09:44.084097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:09:47.361589 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:09:47.363045 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:58558.service - OpenSSH per-connection server daemon (10.0.0.1:58558). Sep 12 17:09:47.462169 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 58558 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:47.467532 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:47.491015 systemd-logind[1418]: New session 1 of user core. Sep 12 17:09:47.491772 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:09:47.501232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:09:47.519212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:09:47.523500 systemd[1]: Starting user@500.service - User Manager for UID 500... 
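The kubelet exit above (and its later retries) is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until that happens the unit keeps failing and systemd keeps scheduling restarts. Purely to illustrate the shape of the file the unit is looking for (the values here are assumptions, not taken from this host, and on a kubeadm-managed node the file should not be hand-written), a minimal KubeletConfiguration could be generated like this:

    // kubelet_config_stub.go: illustrative sketch of the file the failing
    // kubelet.service is looking for. On a real kubeadm-managed node this is
    // generated by "kubeadm init"/"kubeadm join".
    package main

    import (
        "log"
        "os"
    )

    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd matches the SystemdCgroup:true runc option in the containerd dump above
    cgroupDriver: systemd
    `

    func main() {
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote /var/lib/kubelet/config.yaml")
    }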
Sep 12 17:09:47.534461 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:09:47.628376 systemd[1537]: Queued start job for default target default.target. Sep 12 17:09:47.642016 systemd[1537]: Created slice app.slice - User Application Slice. Sep 12 17:09:47.642049 systemd[1537]: Reached target paths.target - Paths. Sep 12 17:09:47.642061 systemd[1537]: Reached target timers.target - Timers. Sep 12 17:09:47.643431 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:09:47.656603 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:09:47.656737 systemd[1537]: Reached target sockets.target - Sockets. Sep 12 17:09:47.656754 systemd[1537]: Reached target basic.target - Basic System. Sep 12 17:09:47.656795 systemd[1537]: Reached target default.target - Main User Target. Sep 12 17:09:47.656831 systemd[1537]: Startup finished in 112ms. Sep 12 17:09:47.657221 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:09:47.660769 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:09:47.729097 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:58566.service - OpenSSH per-connection server daemon (10.0.0.1:58566). Sep 12 17:09:47.773092 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 58566 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:47.774535 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:47.779554 systemd-logind[1418]: New session 2 of user core. Sep 12 17:09:47.787172 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:09:47.841177 sshd[1548]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:47.857423 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:58566.service: Deactivated successfully. Sep 12 17:09:47.859718 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:09:47.861325 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:09:47.871311 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Sep 12 17:09:47.872851 systemd-logind[1418]: Removed session 2. Sep 12 17:09:47.911569 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:47.913202 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:47.917612 systemd-logind[1418]: New session 3 of user core. Sep 12 17:09:47.926161 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:09:47.975444 sshd[1555]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:47.988685 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:58580.service: Deactivated successfully. Sep 12 17:09:47.991778 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:09:47.995319 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:09:48.010867 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:58590.service - OpenSSH per-connection server daemon (10.0.0.1:58590). Sep 12 17:09:48.011996 systemd-logind[1418]: Removed session 3. 
Sep 12 17:09:48.048645 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 58590 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:48.049775 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:48.054374 systemd-logind[1418]: New session 4 of user core. Sep 12 17:09:48.062277 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:09:48.116040 sshd[1562]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:48.125481 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:58590.service: Deactivated successfully. Sep 12 17:09:48.128622 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:09:48.135907 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:09:48.145409 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:58598.service - OpenSSH per-connection server daemon (10.0.0.1:58598). Sep 12 17:09:48.151135 systemd-logind[1418]: Removed session 4. Sep 12 17:09:48.187033 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:48.188284 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:48.192989 systemd-logind[1418]: New session 5 of user core. Sep 12 17:09:48.204119 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:09:48.263933 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:09:48.264228 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:09:48.279831 sudo[1573]: pam_unix(sudo:session): session closed for user root Sep 12 17:09:48.283276 sshd[1569]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:48.299636 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:58598.service: Deactivated successfully. Sep 12 17:09:48.301162 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:09:48.302433 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:09:48.303841 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Sep 12 17:09:48.305974 systemd-logind[1418]: Removed session 5. Sep 12 17:09:48.348348 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:48.351604 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:48.357383 systemd-logind[1418]: New session 6 of user core. Sep 12 17:09:48.370843 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:09:48.430362 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:09:48.430712 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:09:48.437091 sudo[1582]: pam_unix(sudo:session): session closed for user root Sep 12 17:09:48.443547 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:09:48.443829 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:09:48.460235 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:09:48.461847 auditctl[1585]: No rules Sep 12 17:09:48.462807 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 12 17:09:48.464305 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:09:48.466726 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:09:48.496601 augenrules[1603]: No rules Sep 12 17:09:48.498047 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:09:48.499383 sudo[1581]: pam_unix(sudo:session): session closed for user root Sep 12 17:09:48.501207 sshd[1578]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:48.524758 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:58608.service: Deactivated successfully. Sep 12 17:09:48.527293 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:09:48.528850 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:09:48.532551 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:58614.service - OpenSSH per-connection server daemon (10.0.0.1:58614). Sep 12 17:09:48.533559 systemd-logind[1418]: Removed session 6. Sep 12 17:09:48.577316 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 58614 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:09:48.578932 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:48.585243 systemd-logind[1418]: New session 7 of user core. Sep 12 17:09:48.596134 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:09:48.646776 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:09:48.647426 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:09:48.975224 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:09:48.975357 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:09:49.223915 dockerd[1633]: time="2025-09-12T17:09:49.222869577Z" level=info msg="Starting up" Sep 12 17:09:49.534032 systemd[1]: var-lib-docker-metacopy\x2dcheck4293402617-merged.mount: Deactivated successfully. Sep 12 17:09:49.553770 dockerd[1633]: time="2025-09-12T17:09:49.552576429Z" level=info msg="Loading containers: start." Sep 12 17:09:49.687113 kernel: Initializing XFRM netlink socket Sep 12 17:09:49.777822 systemd-networkd[1382]: docker0: Link UP Sep 12 17:09:49.801397 dockerd[1633]: time="2025-09-12T17:09:49.801245407Z" level=info msg="Loading containers: done." Sep 12 17:09:49.815315 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3710793443-merged.mount: Deactivated successfully. Sep 12 17:09:49.822110 dockerd[1633]: time="2025-09-12T17:09:49.821962342Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:09:49.822110 dockerd[1633]: time="2025-09-12T17:09:49.822082066Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:09:49.822268 dockerd[1633]: time="2025-09-12T17:09:49.822197076Z" level=info msg="Daemon has completed initialization" Sep 12 17:09:49.878008 dockerd[1633]: time="2025-09-12T17:09:49.877842048Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:09:49.878162 systemd[1]: Started docker.service - Docker Application Container Engine. 
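Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable over that unix socket. A rough, standard-library-only way to confirm it (the socket path is the one reported in the log; the /_ping endpoint and the dummy "docker" host name are the usual conventions for talking to the daemon over a unix socket) is:

    // docker_ping.go: minimal check that the Engine API answers on the unix
    // socket the daemon reported. Uses only the Go standard library.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request to the daemon's unix socket instead of TCP.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        // The host part of the URL is ignored once DialContext pins the socket.
        resp, err := client.Get("http://docker/_ping")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s %s\n", resp.Status, body) // expect "200 OK OK"
    }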
Sep 12 17:09:50.457763 containerd[1439]: time="2025-09-12T17:09:50.457676636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:09:51.085453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757869706.mount: Deactivated successfully. Sep 12 17:09:52.018099 containerd[1439]: time="2025-09-12T17:09:52.018030966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:52.019050 containerd[1439]: time="2025-09-12T17:09:52.019012454Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 12 17:09:52.020430 containerd[1439]: time="2025-09-12T17:09:52.020366630Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:52.030555 containerd[1439]: time="2025-09-12T17:09:52.030499275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:52.032060 containerd[1439]: time="2025-09-12T17:09:52.031778210Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.574044322s" Sep 12 17:09:52.032060 containerd[1439]: time="2025-09-12T17:09:52.031825512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:09:52.033729 containerd[1439]: time="2025-09-12T17:09:52.033678276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:09:53.315875 containerd[1439]: time="2025-09-12T17:09:53.315823480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:53.320495 containerd[1439]: time="2025-09-12T17:09:53.320447523Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 12 17:09:53.325222 containerd[1439]: time="2025-09-12T17:09:53.325184085Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:53.333440 containerd[1439]: time="2025-09-12T17:09:53.333372483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:53.334500 containerd[1439]: time="2025-09-12T17:09:53.334463477Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.300741399s" Sep 12 
17:09:53.334561 containerd[1439]: time="2025-09-12T17:09:53.334506182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:09:53.334929 containerd[1439]: time="2025-09-12T17:09:53.334893171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:09:54.334279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:09:54.345157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:09:54.447935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:09:54.453257 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:09:54.510128 containerd[1439]: time="2025-09-12T17:09:54.510059744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:54.511243 containerd[1439]: time="2025-09-12T17:09:54.511176805Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 12 17:09:54.512236 containerd[1439]: time="2025-09-12T17:09:54.512211514Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:54.515245 containerd[1439]: time="2025-09-12T17:09:54.515210225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:54.516455 containerd[1439]: time="2025-09-12T17:09:54.516423587Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.181482075s" Sep 12 17:09:54.516511 containerd[1439]: time="2025-09-12T17:09:54.516463019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:09:54.517817 containerd[1439]: time="2025-09-12T17:09:54.517669447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:09:54.544290 kubelet[1855]: E0912 17:09:54.544227 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:09:54.547635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:09:54.547790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:09:55.456984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976052600.mount: Deactivated successfully. 
Sep 12 17:09:55.826777 containerd[1439]: time="2025-09-12T17:09:55.826616976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:55.828251 containerd[1439]: time="2025-09-12T17:09:55.827350635Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 12 17:09:55.828880 containerd[1439]: time="2025-09-12T17:09:55.828506810Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:55.831438 containerd[1439]: time="2025-09-12T17:09:55.831170338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:55.831882 containerd[1439]: time="2025-09-12T17:09:55.831856663Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.314146069s" Sep 12 17:09:55.831970 containerd[1439]: time="2025-09-12T17:09:55.831888245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:09:55.832841 containerd[1439]: time="2025-09-12T17:09:55.832769343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:09:56.350436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448485304.mount: Deactivated successfully. 
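The PullImage/ImageCreate pairs in this stretch are the CRI plugin fetching the control-plane images into containerd's k8s.io namespace, with the reported sizes and durations. The same kind of pull can be reproduced directly against /run/containerd/containerd.sock with the containerd Go client; this is a rough sketch (the image reference is just one of those appearing in the log), not the code path the kubelet actually uses:

    // ctr_pull.go: pull one of the images from the log via the containerd client.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin stores Kubernetes images under the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }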
Sep 12 17:09:57.299167 containerd[1439]: time="2025-09-12T17:09:57.299108986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:57.315394 containerd[1439]: time="2025-09-12T17:09:57.315325090Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 12 17:09:57.342865 containerd[1439]: time="2025-09-12T17:09:57.342794294Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:57.359964 containerd[1439]: time="2025-09-12T17:09:57.359898904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:57.362015 containerd[1439]: time="2025-09-12T17:09:57.361239552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.528432259s" Sep 12 17:09:57.362015 containerd[1439]: time="2025-09-12T17:09:57.361374193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 17:09:57.363455 containerd[1439]: time="2025-09-12T17:09:57.363427125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:09:58.210016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444857197.mount: Deactivated successfully. 
Sep 12 17:09:58.215815 containerd[1439]: time="2025-09-12T17:09:58.215759371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:58.216720 containerd[1439]: time="2025-09-12T17:09:58.216682424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:09:58.218030 containerd[1439]: time="2025-09-12T17:09:58.217987277Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:58.221234 containerd[1439]: time="2025-09-12T17:09:58.221175114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:09:58.222108 containerd[1439]: time="2025-09-12T17:09:58.221976112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 858.511812ms" Sep 12 17:09:58.222108 containerd[1439]: time="2025-09-12T17:09:58.222013031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:09:58.222948 containerd[1439]: time="2025-09-12T17:09:58.222923313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:09:58.639606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809975250.mount: Deactivated successfully. Sep 12 17:10:00.263103 containerd[1439]: time="2025-09-12T17:10:00.263051955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:00.263993 containerd[1439]: time="2025-09-12T17:10:00.263696112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 12 17:10:00.265521 containerd[1439]: time="2025-09-12T17:10:00.264891335Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:00.268941 containerd[1439]: time="2025-09-12T17:10:00.268898327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:00.270208 containerd[1439]: time="2025-09-12T17:10:00.270178316Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.047219822s" Sep 12 17:10:00.270381 containerd[1439]: time="2025-09-12T17:10:00.270273958Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 17:10:04.798106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 12 17:10:04.807129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:04.903363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:04.907671 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:04.939522 kubelet[2014]: E0912 17:10:04.939459 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:04.942474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:04.942623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:05.729839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:05.745175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:05.769746 systemd[1]: Reloading requested from client PID 2030 ('systemctl') (unit session-7.scope)... Sep 12 17:10:05.769763 systemd[1]: Reloading... Sep 12 17:10:05.853765 zram_generator::config[2065]: No configuration found. Sep 12 17:10:06.001282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:06.057712 systemd[1]: Reloading finished in 287 ms. Sep 12 17:10:06.096368 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:10:06.096431 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:10:06.096698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:06.099403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:06.222486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:06.226965 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:06.266317 kubelet[2115]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:06.266317 kubelet[2115]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:06.266317 kubelet[2115]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:10:06.266317 kubelet[2115]: I0912 17:10:06.266012 2115 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:06.984572 kubelet[2115]: I0912 17:10:06.984520 2115 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:10:06.984572 kubelet[2115]: I0912 17:10:06.984553 2115 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:06.984779 kubelet[2115]: I0912 17:10:06.984763 2115 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:10:07.008988 kubelet[2115]: E0912 17:10:07.008936 2115 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:10:07.010054 kubelet[2115]: I0912 17:10:07.010030 2115 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:07.018151 kubelet[2115]: E0912 17:10:07.018109 2115 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:07.018151 kubelet[2115]: I0912 17:10:07.018148 2115 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:07.021804 kubelet[2115]: I0912 17:10:07.020727 2115 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:10:07.021804 kubelet[2115]: I0912 17:10:07.021096 2115 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:07.021804 kubelet[2115]: I0912 17:10:07.021129 2115 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:10:07.021804 kubelet[2115]: I0912 17:10:07.021341 2115 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:10:07.022073 kubelet[2115]: I0912 17:10:07.021348 2115 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:10:07.022073 kubelet[2115]: I0912 17:10:07.021541 2115 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:07.024481 kubelet[2115]: I0912 17:10:07.024453 2115 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:10:07.024597 kubelet[2115]: I0912 17:10:07.024585 2115 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:07.024688 kubelet[2115]: I0912 17:10:07.024677 2115 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:10:07.026005 kubelet[2115]: I0912 17:10:07.025986 2115 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:07.027367 kubelet[2115]: I0912 17:10:07.027347 2115 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:07.028202 kubelet[2115]: I0912 17:10:07.028179 2115 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:10:07.028400 kubelet[2115]: W0912 17:10:07.028386 2115 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
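The nodeConfig dump above is dense, so a restatement of its key settings: systemd cgroup driver, cgroup root "/", cgroups-per-QOS enabled, and the standard hard-eviction thresholds. The small sketch below re-expresses those thresholds as data read directly off the dump (the struct is local to the sketch, not a kubelet API type):

    // eviction_thresholds.go: the HardEvictionThresholds from the nodeConfig
    // dump above, restated so they are easier to read than the inline JSON.
    package main

    import "fmt"

    // threshold is a local helper type, not a kubelet API type.
    type threshold struct {
        Signal   string
        Operator string
        Value    string // quantity or percentage, as reported in the dump
    }

    func main() {
        defaults := []threshold{
            {"memory.available", "LessThan", "100Mi"},
            {"nodefs.available", "LessThan", "10%"},
            {"nodefs.inodesFree", "LessThan", "5%"},
            {"imagefs.available", "LessThan", "15%"},
            {"imagefs.inodesFree", "LessThan", "5%"},
        }
        for _, t := range defaults {
            fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, t.Value)
        }
    }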
Sep 12 17:10:07.028943 kubelet[2115]: E0912 17:10:07.028848 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:10:07.028943 kubelet[2115]: E0912 17:10:07.028848 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:10:07.033249 kubelet[2115]: I0912 17:10:07.033228 2115 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:10:07.033382 kubelet[2115]: I0912 17:10:07.033371 2115 server.go:1289] "Started kubelet" Sep 12 17:10:07.033498 kubelet[2115]: I0912 17:10:07.033473 2115 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:07.034015 kubelet[2115]: I0912 17:10:07.033969 2115 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:07.034313 kubelet[2115]: I0912 17:10:07.034283 2115 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:07.036258 kubelet[2115]: I0912 17:10:07.034800 2115 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:07.036258 kubelet[2115]: I0912 17:10:07.035247 2115 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:10:07.036521 kubelet[2115]: I0912 17:10:07.036495 2115 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:07.037207 kubelet[2115]: E0912 17:10:07.037177 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:10:07.037272 kubelet[2115]: I0912 17:10:07.037225 2115 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:10:07.038056 kubelet[2115]: E0912 17:10:07.036088 2115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864981d7efd3241 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:10:07.033332289 +0000 UTC m=+0.802642979,LastTimestamp:2025-09-12 17:10:07.033332289 +0000 UTC m=+0.802642979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:10:07.038056 kubelet[2115]: I0912 17:10:07.037432 2115 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:10:07.038056 kubelet[2115]: I0912 17:10:07.037490 2115 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:07.038353 kubelet[2115]: E0912 17:10:07.038306 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Sep 12 17:10:07.038353 kubelet[2115]: E0912 17:10:07.038327 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:10:07.039545 kubelet[2115]: I0912 17:10:07.039525 2115 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:10:07.039814 kubelet[2115]: I0912 17:10:07.039786 2115 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:07.040428 kubelet[2115]: E0912 17:10:07.040401 2115 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:10:07.044522 kubelet[2115]: I0912 17:10:07.044442 2115 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:10:07.055876 kubelet[2115]: I0912 17:10:07.055834 2115 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:10:07.055876 kubelet[2115]: I0912 17:10:07.055854 2115 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:07.055876 kubelet[2115]: I0912 17:10:07.055873 2115 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:07.058525 kubelet[2115]: I0912 17:10:07.058465 2115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:07.059723 kubelet[2115]: I0912 17:10:07.059678 2115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:10:07.059723 kubelet[2115]: I0912 17:10:07.059709 2115 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:10:07.059816 kubelet[2115]: I0912 17:10:07.059742 2115 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:10:07.059816 kubelet[2115]: I0912 17:10:07.059752 2115 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:10:07.059816 kubelet[2115]: E0912 17:10:07.059800 2115 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:07.063697 kubelet[2115]: E0912 17:10:07.063641 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:10:07.137374 kubelet[2115]: E0912 17:10:07.137298 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:10:07.160700 kubelet[2115]: E0912 17:10:07.160631 2115 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:10:07.221626 kubelet[2115]: I0912 17:10:07.221540 2115 policy_none.go:49] "None policy: Start" Sep 12 17:10:07.221626 kubelet[2115]: I0912 17:10:07.221601 2115 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:10:07.221626 kubelet[2115]: I0912 17:10:07.221625 2115 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:07.238056 kubelet[2115]: E0912 17:10:07.237956 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:10:07.239648 kubelet[2115]: E0912 17:10:07.239600 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Sep 12 17:10:07.287825 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:10:07.306963 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:10:07.310586 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:10:07.322962 kubelet[2115]: E0912 17:10:07.322804 2115 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:10:07.323533 kubelet[2115]: I0912 17:10:07.323046 2115 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:07.323533 kubelet[2115]: I0912 17:10:07.323059 2115 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:07.323533 kubelet[2115]: I0912 17:10:07.323270 2115 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:07.324034 kubelet[2115]: E0912 17:10:07.324011 2115 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:10:07.324096 kubelet[2115]: E0912 17:10:07.324057 2115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:10:07.373352 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
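Every "connection refused" against https://10.0.0.22:6443 in this stretch is expected at this stage: the kube-apiserver the kubelet is trying to reach is itself one of the static pods this kubelet is about to start from /etc/kubernetes/manifests, so the reflectors, the lease controller, and node registration keep retrying until that pod is running. A rough way to watch for the API server coming up (the address is the one in the log; the timeout, interval, and skipped TLS verification are assumptions for an unauthenticated liveness probe, not how the kubelet itself connects):

    // apiserver_wait.go: poll the /healthz endpoint the kubelet is failing to
    // reach until it answers.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Skip certificate verification: this is a reachability sketch,
                // not an authenticated API call.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.0.22:6443/healthz")
            if err == nil {
                resp.Body.Close()
                log.Printf("kube-apiserver answered: %s", resp.Status)
                return
            }
            log.Printf("still waiting: %v", err)
            time.Sleep(2 * time.Second)
        }
    }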
Sep 12 17:10:07.395814 kubelet[2115]: E0912 17:10:07.395761 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:07.399236 systemd[1]: Created slice kubepods-burstable-podeae92bdfe251fca7bf28d1c19b468ccc.slice - libcontainer container kubepods-burstable-podeae92bdfe251fca7bf28d1c19b468ccc.slice. Sep 12 17:10:07.413772 kubelet[2115]: E0912 17:10:07.413732 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:07.416686 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 17:10:07.418283 kubelet[2115]: E0912 17:10:07.418248 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:07.424218 kubelet[2115]: I0912 17:10:07.424194 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:10:07.424673 kubelet[2115]: E0912 17:10:07.424639 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 17:10:07.440319 kubelet[2115]: I0912 17:10:07.440075 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:07.440319 kubelet[2115]: I0912 17:10:07.440133 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:07.440319 kubelet[2115]: I0912 17:10:07.440155 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:07.440319 kubelet[2115]: I0912 17:10:07.440172 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:07.440319 kubelet[2115]: I0912 17:10:07.440187 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:07.440530 kubelet[2115]: I0912 17:10:07.440216 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:07.440530 kubelet[2115]: I0912 17:10:07.440232 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:07.440530 kubelet[2115]: I0912 17:10:07.440249 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:07.440530 kubelet[2115]: I0912 17:10:07.440274 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:07.626343 kubelet[2115]: I0912 17:10:07.626238 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:10:07.626588 kubelet[2115]: E0912 17:10:07.626554 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 17:10:07.640235 kubelet[2115]: E0912 17:10:07.640198 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Sep 12 17:10:07.696582 kubelet[2115]: E0912 17:10:07.696534 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:07.697224 containerd[1439]: time="2025-09-12T17:10:07.697178994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:07.714576 kubelet[2115]: E0912 17:10:07.714500 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:07.715069 containerd[1439]: time="2025-09-12T17:10:07.715026679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eae92bdfe251fca7bf28d1c19b468ccc,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:07.719332 kubelet[2115]: E0912 17:10:07.719259 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:07.719703 containerd[1439]: time="2025-09-12T17:10:07.719670344Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:07.979182 kubelet[2115]: E0912 17:10:07.979049 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:10:08.028711 kubelet[2115]: I0912 17:10:08.028679 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:10:08.029077 kubelet[2115]: E0912 17:10:08.029041 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 17:10:08.087401 kubelet[2115]: E0912 17:10:08.087350 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:10:08.187616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442805605.mount: Deactivated successfully. Sep 12 17:10:08.192727 containerd[1439]: time="2025-09-12T17:10:08.192657130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:08.195058 containerd[1439]: time="2025-09-12T17:10:08.194949259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 12 17:10:08.195818 containerd[1439]: time="2025-09-12T17:10:08.195786384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:08.197928 containerd[1439]: time="2025-09-12T17:10:08.196892485Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:08.197928 containerd[1439]: time="2025-09-12T17:10:08.197218420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:08.198506 containerd[1439]: time="2025-09-12T17:10:08.198461766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:08.199539 containerd[1439]: time="2025-09-12T17:10:08.199484227Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:08.205524 containerd[1439]: time="2025-09-12T17:10:08.205323813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:08.206688 containerd[1439]: time="2025-09-12T17:10:08.206638018Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.378593ms" Sep 12 17:10:08.207877 containerd[1439]: time="2025-09-12T17:10:08.207698904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.591594ms" Sep 12 17:10:08.210325 containerd[1439]: time="2025-09-12T17:10:08.210175929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.434539ms" Sep 12 17:10:08.311569 containerd[1439]: time="2025-09-12T17:10:08.310972282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:08.312069 containerd[1439]: time="2025-09-12T17:10:08.311988392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:08.312197 containerd[1439]: time="2025-09-12T17:10:08.312143411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.313212 containerd[1439]: time="2025-09-12T17:10:08.313030026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:08.313318 containerd[1439]: time="2025-09-12T17:10:08.313231298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:08.313387 containerd[1439]: time="2025-09-12T17:10:08.313288577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.313502 containerd[1439]: time="2025-09-12T17:10:08.313452143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.313597 containerd[1439]: time="2025-09-12T17:10:08.313566220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.313892 containerd[1439]: time="2025-09-12T17:10:08.313820378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:08.313892 containerd[1439]: time="2025-09-12T17:10:08.313864635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:08.314157 containerd[1439]: time="2025-09-12T17:10:08.313880053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.314157 containerd[1439]: time="2025-09-12T17:10:08.313969365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:08.338262 systemd[1]: Started cri-containerd-8f3bd089d0bee1f5f25da5382e7cae535a21814bc0097b0714ce47c1ad0147c1.scope - libcontainer container 8f3bd089d0bee1f5f25da5382e7cae535a21814bc0097b0714ce47c1ad0147c1. Sep 12 17:10:08.339659 systemd[1]: Started cri-containerd-c0b1b678dcd188f1db44b14621cfa25d1ffcb6a88ff780139749d2a176b89c90.scope - libcontainer container c0b1b678dcd188f1db44b14621cfa25d1ffcb6a88ff780139749d2a176b89c90. Sep 12 17:10:08.343501 systemd[1]: Started cri-containerd-b33b5bf8ae8e8e53b4b2d77c255a3261488202cf180f34f967a450b83e8465d5.scope - libcontainer container b33b5bf8ae8e8e53b4b2d77c255a3261488202cf180f34f967a450b83e8465d5. Sep 12 17:10:08.374778 kubelet[2115]: E0912 17:10:08.374731 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:10:08.383727 containerd[1439]: time="2025-09-12T17:10:08.383651242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0b1b678dcd188f1db44b14621cfa25d1ffcb6a88ff780139749d2a176b89c90\"" Sep 12 17:10:08.385007 kubelet[2115]: E0912 17:10:08.384938 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:08.387922 kubelet[2115]: E0912 17:10:08.387414 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:10:08.389330 containerd[1439]: time="2025-09-12T17:10:08.389216101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b33b5bf8ae8e8e53b4b2d77c255a3261488202cf180f34f967a450b83e8465d5\"" Sep 12 17:10:08.390702 kubelet[2115]: E0912 17:10:08.390645 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:08.391418 containerd[1439]: time="2025-09-12T17:10:08.391373741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eae92bdfe251fca7bf28d1c19b468ccc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f3bd089d0bee1f5f25da5382e7cae535a21814bc0097b0714ce47c1ad0147c1\"" Sep 12 17:10:08.392121 containerd[1439]: time="2025-09-12T17:10:08.392092556Z" level=info msg="CreateContainer within sandbox \"c0b1b678dcd188f1db44b14621cfa25d1ffcb6a88ff780139749d2a176b89c90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:10:08.392532 kubelet[2115]: E0912 17:10:08.392512 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:08.396338 containerd[1439]: time="2025-09-12T17:10:08.395958519Z" level=info msg="CreateContainer within sandbox \"b33b5bf8ae8e8e53b4b2d77c255a3261488202cf180f34f967a450b83e8465d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:10:08.399112 containerd[1439]: time="2025-09-12T17:10:08.399074632Z" level=info msg="CreateContainer within sandbox \"8f3bd089d0bee1f5f25da5382e7cae535a21814bc0097b0714ce47c1ad0147c1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:10:08.440648 kubelet[2115]: E0912 17:10:08.440601 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Sep 12 17:10:08.474338 containerd[1439]: time="2025-09-12T17:10:08.474171181Z" level=info msg="CreateContainer within sandbox \"c0b1b678dcd188f1db44b14621cfa25d1ffcb6a88ff780139749d2a176b89c90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b35e8974589965752fb244998680547796aaa780ebaf1acc1ff1f324845b3d71\"" Sep 12 17:10:08.475742 containerd[1439]: time="2025-09-12T17:10:08.474952267Z" level=info msg="StartContainer for \"b35e8974589965752fb244998680547796aaa780ebaf1acc1ff1f324845b3d71\"" Sep 12 17:10:08.481856 containerd[1439]: time="2025-09-12T17:10:08.481811837Z" level=info msg="CreateContainer within sandbox \"b33b5bf8ae8e8e53b4b2d77c255a3261488202cf180f34f967a450b83e8465d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5855929cf08c3fe6d5886dc247b39b58c8e35d6cf84c9232463c33c9baff576d\"" Sep 12 17:10:08.482638 containerd[1439]: time="2025-09-12T17:10:08.482565961Z" level=info msg="StartContainer for \"5855929cf08c3fe6d5886dc247b39b58c8e35d6cf84c9232463c33c9baff576d\"" Sep 12 17:10:08.482875 containerd[1439]: time="2025-09-12T17:10:08.482822355Z" level=info msg="CreateContainer within sandbox \"8f3bd089d0bee1f5f25da5382e7cae535a21814bc0097b0714ce47c1ad0147c1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d0324b1c6f293c69227c2a4dcf6fb1f061c7d092da69e5255b87f8211f8b9bb\"" Sep 12 17:10:08.484156 containerd[1439]: time="2025-09-12T17:10:08.483176929Z" level=info msg="StartContainer for \"8d0324b1c6f293c69227c2a4dcf6fb1f061c7d092da69e5255b87f8211f8b9bb\"" Sep 12 17:10:08.505795 systemd[1]: Started cri-containerd-b35e8974589965752fb244998680547796aaa780ebaf1acc1ff1f324845b3d71.scope - libcontainer container b35e8974589965752fb244998680547796aaa780ebaf1acc1ff1f324845b3d71. Sep 12 17:10:08.509720 systemd[1]: Started cri-containerd-5855929cf08c3fe6d5886dc247b39b58c8e35d6cf84c9232463c33c9baff576d.scope - libcontainer container 5855929cf08c3fe6d5886dc247b39b58c8e35d6cf84c9232463c33c9baff576d. Sep 12 17:10:08.511314 systemd[1]: Started cri-containerd-8d0324b1c6f293c69227c2a4dcf6fb1f061c7d092da69e5255b87f8211f8b9bb.scope - libcontainer container 8d0324b1c6f293c69227c2a4dcf6fb1f061c7d092da69e5255b87f8211f8b9bb. 
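The two "Failed to ensure lease exists, will retry" entries above show the retry interval doubling from 800ms to 1.6s while the API server at 10.0.0.22:6443 is still refusing connections. A minimal sketch of that doubling pattern, assuming a simple exponential backoff with an arbitrary cap (the actual starting value and cap used by the kubelet are not visible in this log):

```python
# Illustrative only: reproduces the doubling seen in the two
# "Failed to ensure lease exists, will retry" entries above
# (interval="800ms" -> interval="1.6s"). The starting value and the
# cap here are assumptions, not values taken from kubelet itself.
def lease_retry_intervals(start=0.8, cap=7.0, attempts=6):
    """Yield successive retry intervals in seconds, doubling up to a cap."""
    interval = start
    for _ in range(attempts):
        yield interval
        interval = min(interval * 2, cap)

print([f"{i:.1f}s" for i in lease_retry_intervals()])
# ['0.8s', '1.6s', '3.2s', '6.4s', '7.0s', '7.0s']
```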
Sep 12 17:10:08.556225 containerd[1439]: time="2025-09-12T17:10:08.556167405Z" level=info msg="StartContainer for \"b35e8974589965752fb244998680547796aaa780ebaf1acc1ff1f324845b3d71\" returns successfully" Sep 12 17:10:08.556225 containerd[1439]: time="2025-09-12T17:10:08.556203513Z" level=info msg="StartContainer for \"5855929cf08c3fe6d5886dc247b39b58c8e35d6cf84c9232463c33c9baff576d\" returns successfully" Sep 12 17:10:08.560735 containerd[1439]: time="2025-09-12T17:10:08.560673494Z" level=info msg="StartContainer for \"8d0324b1c6f293c69227c2a4dcf6fb1f061c7d092da69e5255b87f8211f8b9bb\" returns successfully" Sep 12 17:10:08.836322 kubelet[2115]: I0912 17:10:08.836291 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:10:09.079606 kubelet[2115]: E0912 17:10:09.079417 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:09.079957 kubelet[2115]: E0912 17:10:09.079741 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:09.083354 kubelet[2115]: E0912 17:10:09.083238 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:09.083435 kubelet[2115]: E0912 17:10:09.083422 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:09.084148 kubelet[2115]: E0912 17:10:09.083897 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:09.084148 kubelet[2115]: E0912 17:10:09.084040 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:10.089075 kubelet[2115]: E0912 17:10:10.089032 2115 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:10:10.092791 kubelet[2115]: E0912 17:10:10.092452 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:10.092791 kubelet[2115]: E0912 17:10:10.092598 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:10.096953 kubelet[2115]: E0912 17:10:10.093053 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:10:10.096953 kubelet[2115]: E0912 17:10:10.093166 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:10.266366 kubelet[2115]: I0912 17:10:10.266328 2115 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:10:10.338424 kubelet[2115]: I0912 17:10:10.338389 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:10.346227 
kubelet[2115]: E0912 17:10:10.346147 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:10.346227 kubelet[2115]: I0912 17:10:10.346181 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:10.349601 kubelet[2115]: E0912 17:10:10.349569 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:10.349601 kubelet[2115]: I0912 17:10:10.349601 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:10.351529 kubelet[2115]: E0912 17:10:10.351502 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:11.030639 kubelet[2115]: I0912 17:10:11.030239 2115 apiserver.go:52] "Watching apiserver" Sep 12 17:10:11.037823 kubelet[2115]: I0912 17:10:11.037783 2115 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:10:12.174571 systemd[1]: Reloading requested from client PID 2408 ('systemctl') (unit session-7.scope)... Sep 12 17:10:12.174590 systemd[1]: Reloading... Sep 12 17:10:12.285010 zram_generator::config[2447]: No configuration found. Sep 12 17:10:12.590880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:12.658472 systemd[1]: Reloading finished in 483 ms. Sep 12 17:10:12.698546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:12.720164 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:10:12.720404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:12.720468 systemd[1]: kubelet.service: Consumed 1.179s CPU time, 129.1M memory peak, 0B memory swap peak. Sep 12 17:10:12.729500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:12.840521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:12.845064 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:12.883025 kubelet[2489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:12.883025 kubelet[2489]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:12.883025 kubelet[2489]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:10:12.885044 kubelet[2489]: I0912 17:10:12.883587 2489 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:12.895690 kubelet[2489]: I0912 17:10:12.892376 2489 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:10:12.895690 kubelet[2489]: I0912 17:10:12.892405 2489 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:12.895690 kubelet[2489]: I0912 17:10:12.892637 2489 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:10:12.895690 kubelet[2489]: I0912 17:10:12.894128 2489 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:10:12.898198 kubelet[2489]: I0912 17:10:12.898102 2489 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:12.905385 kubelet[2489]: E0912 17:10:12.905311 2489 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:12.905385 kubelet[2489]: I0912 17:10:12.905384 2489 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:12.909827 kubelet[2489]: I0912 17:10:12.909796 2489 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:10:12.910089 kubelet[2489]: I0912 17:10:12.910026 2489 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:12.910630 kubelet[2489]: I0912 17:10:12.910088 2489 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:10:12.910630 kubelet[2489]: I0912 17:10:12.910630 2489 topology_manager.go:138] "Creating topology 
manager with none policy" Sep 12 17:10:12.910767 kubelet[2489]: I0912 17:10:12.910641 2489 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:10:12.910767 kubelet[2489]: I0912 17:10:12.910698 2489 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:12.910940 kubelet[2489]: I0912 17:10:12.910853 2489 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:10:12.910940 kubelet[2489]: I0912 17:10:12.910870 2489 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:12.910940 kubelet[2489]: I0912 17:10:12.910892 2489 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:10:12.910940 kubelet[2489]: I0912 17:10:12.910922 2489 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:12.913959 kubelet[2489]: I0912 17:10:12.912054 2489 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:12.913959 kubelet[2489]: I0912 17:10:12.912616 2489 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.914617 2489 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.914661 2489 server.go:1289] "Started kubelet" Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.914882 2489 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.915141 2489 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.914790 2489 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:12.919927 kubelet[2489]: E0912 17:10:12.918930 2489 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:10:12.919927 kubelet[2489]: I0912 17:10:12.919092 2489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:12.920781 kubelet[2489]: I0912 17:10:12.920745 2489 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:10:12.930529 kubelet[2489]: I0912 17:10:12.930489 2489 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:12.931482 kubelet[2489]: I0912 17:10:12.931454 2489 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:10:12.931747 kubelet[2489]: E0912 17:10:12.931687 2489 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:10:12.932634 kubelet[2489]: I0912 17:10:12.932606 2489 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:10:12.932750 kubelet[2489]: I0912 17:10:12.932735 2489 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:12.933616 kubelet[2489]: I0912 17:10:12.933572 2489 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:10:12.933809 kubelet[2489]: I0912 17:10:12.933781 2489 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:12.936200 kubelet[2489]: I0912 17:10:12.935311 2489 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:10:12.941941 kubelet[2489]: I0912 17:10:12.941878 2489 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:12.943123 kubelet[2489]: I0912 17:10:12.943096 2489 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:10:12.943248 kubelet[2489]: I0912 17:10:12.943236 2489 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:10:12.943325 kubelet[2489]: I0912 17:10:12.943314 2489 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
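The container manager entry above lists the kubelet's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A rough sketch of how thresholds of that shape could be checked against node stats; the stats below are hypothetical, and only the threshold values come from the log:

```python
# Thresholds copied from the HardEvictionThresholds in the nodeConfig entry above.
thresholds = {
    "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

# Hypothetical current stats: (available, capacity) per signal.
stats = {
    "memory.available": (80 * 1024 * 1024, 2 * 1024**3),
    "nodefs.available": (12 * 1024**3, 40 * 1024**3),
    "nodefs.inodesFree": (900_000, 2_000_000),
    "imagefs.available": (5 * 1024**3, 40 * 1024**3),
    "imagefs.inodesFree": (900_000, 2_000_000),
}

for signal, (kind, limit) in thresholds.items():
    available, capacity = stats[signal]
    threshold = limit if kind == "quantity" else limit * capacity
    if available < threshold:
        print(f"{signal}: below threshold ({available} < {threshold:.0f}) -> eviction signal")
    else:
        print(f"{signal}: ok")
```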
Sep 12 17:10:12.943377 kubelet[2489]: I0912 17:10:12.943369 2489 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:10:12.943485 kubelet[2489]: E0912 17:10:12.943468 2489 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:12.979769 kubelet[2489]: I0912 17:10:12.979741 2489 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:10:12.979965 kubelet[2489]: I0912 17:10:12.979948 2489 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:12.980028 kubelet[2489]: I0912 17:10:12.980016 2489 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:12.980254 kubelet[2489]: I0912 17:10:12.980237 2489 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:10:12.980325 kubelet[2489]: I0912 17:10:12.980303 2489 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:10:12.980377 kubelet[2489]: I0912 17:10:12.980368 2489 policy_none.go:49] "None policy: Start" Sep 12 17:10:12.980423 kubelet[2489]: I0912 17:10:12.980416 2489 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:10:12.980477 kubelet[2489]: I0912 17:10:12.980469 2489 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:12.980661 kubelet[2489]: I0912 17:10:12.980645 2489 state_mem.go:75] "Updated machine memory state" Sep 12 17:10:12.987234 kubelet[2489]: E0912 17:10:12.987203 2489 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:10:12.987772 kubelet[2489]: I0912 17:10:12.987586 2489 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:12.987916 kubelet[2489]: I0912 17:10:12.987855 2489 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:12.988166 kubelet[2489]: I0912 17:10:12.988151 2489 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:12.989054 kubelet[2489]: E0912 17:10:12.989031 2489 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:10:13.044365 kubelet[2489]: I0912 17:10:13.044324 2489 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:13.044365 kubelet[2489]: I0912 17:10:13.044346 2489 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.044537 kubelet[2489]: I0912 17:10:13.044389 2489 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.092844 kubelet[2489]: I0912 17:10:13.092817 2489 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:10:13.101963 kubelet[2489]: I0912 17:10:13.101599 2489 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:10:13.101963 kubelet[2489]: I0912 17:10:13.101727 2489 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:10:13.134116 kubelet[2489]: I0912 17:10:13.133816 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.134116 kubelet[2489]: I0912 17:10:13.133860 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.134116 kubelet[2489]: I0912 17:10:13.133882 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.134116 kubelet[2489]: I0912 17:10:13.133925 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.134116 kubelet[2489]: I0912 17:10:13.133942 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.134382 kubelet[2489]: I0912 17:10:13.133957 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.134382 kubelet[2489]: I0912 17:10:13.133972 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:13.134382 kubelet[2489]: I0912 17:10:13.134022 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eae92bdfe251fca7bf28d1c19b468ccc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eae92bdfe251fca7bf28d1c19b468ccc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.134382 kubelet[2489]: I0912 17:10:13.134086 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:10:13.171634 sudo[2530]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:10:13.171943 sudo[2530]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:10:13.350953 kubelet[2489]: E0912 17:10:13.350649 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.352074 kubelet[2489]: E0912 17:10:13.351694 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.352074 kubelet[2489]: E0912 17:10:13.351844 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.627023 sudo[2530]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:13.912118 kubelet[2489]: I0912 17:10:13.911820 2489 apiserver.go:52] "Watching apiserver" Sep 12 17:10:13.933536 kubelet[2489]: I0912 17:10:13.933496 2489 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:10:13.959386 kubelet[2489]: E0912 17:10:13.959342 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.959557 kubelet[2489]: I0912 17:10:13.959540 2489 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:10:13.960671 kubelet[2489]: I0912 17:10:13.960650 2489 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.970411 kubelet[2489]: E0912 17:10:13.970373 2489 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:10:13.970559 kubelet[2489]: E0912 17:10:13.970540 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.971618 kubelet[2489]: E0912 17:10:13.970868 2489 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 
17:10:13.971618 kubelet[2489]: E0912 17:10:13.971007 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:13.990859 kubelet[2489]: I0912 17:10:13.990778 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.99075899 podStartE2EDuration="990.75899ms" podCreationTimestamp="2025-09-12 17:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:13.987550457 +0000 UTC m=+1.138961643" watchObservedRunningTime="2025-09-12 17:10:13.99075899 +0000 UTC m=+1.142170176" Sep 12 17:10:13.999295 kubelet[2489]: I0912 17:10:13.999214 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.999175873 podStartE2EDuration="999.175873ms" podCreationTimestamp="2025-09-12 17:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:13.997441582 +0000 UTC m=+1.148852808" watchObservedRunningTime="2025-09-12 17:10:13.999175873 +0000 UTC m=+1.150587019" Sep 12 17:10:14.011956 kubelet[2489]: I0912 17:10:14.011207 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.011190187 podStartE2EDuration="1.011190187s" podCreationTimestamp="2025-09-12 17:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:14.010893816 +0000 UTC m=+1.162305002" watchObservedRunningTime="2025-09-12 17:10:14.011190187 +0000 UTC m=+1.162601373" Sep 12 17:10:14.961153 kubelet[2489]: E0912 17:10:14.961036 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:14.961153 kubelet[2489]: E0912 17:10:14.961103 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:15.253251 sudo[1614]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:15.255172 sshd[1611]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:15.258342 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:58614.service: Deactivated successfully. Sep 12 17:10:15.262487 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:10:15.262672 systemd[1]: session-7.scope: Consumed 7.578s CPU time, 157.0M memory peak, 0B memory swap peak. Sep 12 17:10:15.263854 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:10:15.265057 systemd-logind[1418]: Removed session 7. 
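The pod_startup_latency_tracker entries above report a podStartE2EDuration for each static control-plane pod. A small sketch that pulls the pod name and duration out of one such entry with a regular expression; the sample string is a shortened copy of the kube-controller-manager entry, and the pattern is tailored to this excerpt rather than being a general kubelet log parser:

```python
import re

LINE = ('I0912 17:10:14.011207 2489 pod_startup_latency_tracker.go:104] '
        '"Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" '
        'podStartSLOduration=1.011190187 podStartE2EDuration="1.011190187s" ...')

pattern = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartE2EDuration="(?P<dur>[^"]+)"')
match = pattern.search(LINE)
if match:
    print(match.group("pod"), match.group("dur"))
# kube-system/kube-controller-manager-localhost 1.011190187s
```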
Sep 12 17:10:15.962653 kubelet[2489]: E0912 17:10:15.962544 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:16.253056 kubelet[2489]: E0912 17:10:16.252945 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:18.352995 kubelet[2489]: I0912 17:10:18.352913 2489 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:10:18.355472 containerd[1439]: time="2025-09-12T17:10:18.354927680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:10:18.357330 kubelet[2489]: I0912 17:10:18.356560 2489 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:10:19.203076 systemd[1]: Created slice kubepods-besteffort-pode0569445_bfd9_435c_a139_15e208375ea7.slice - libcontainer container kubepods-besteffort-pode0569445_bfd9_435c_a139_15e208375ea7.slice. Sep 12 17:10:19.239558 systemd[1]: Created slice kubepods-burstable-pod38b53b81_c476_44b6_ae94_bd84b04cd7d0.slice - libcontainer container kubepods-burstable-pod38b53b81_c476_44b6_ae94_bd84b04cd7d0.slice. Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273671 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0569445-bfd9-435c-a139-15e208375ea7-xtables-lock\") pod \"kube-proxy-6slgn\" (UID: \"e0569445-bfd9-435c-a139-15e208375ea7\") " pod="kube-system/kube-proxy-6slgn" Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273717 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-run\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273732 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-bpf-maps\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273746 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cni-path\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273761 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38b53b81-c476-44b6-ae94-bd84b04cd7d0-clustermesh-secrets\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274174 kubelet[2489]: I0912 17:10:19.273777 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-config-path\") pod \"cilium-pcs22\" 
(UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274418 kubelet[2489]: I0912 17:10:19.273790 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-net\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274418 kubelet[2489]: I0912 17:10:19.273806 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-kernel\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274418 kubelet[2489]: I0912 17:10:19.273824 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hubble-tls\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274418 kubelet[2489]: I0912 17:10:19.273838 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s4c\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-kube-api-access-g9s4c\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.274418 kubelet[2489]: I0912 17:10:19.273855 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0569445-bfd9-435c-a139-15e208375ea7-kube-proxy\") pod \"kube-proxy-6slgn\" (UID: \"e0569445-bfd9-435c-a139-15e208375ea7\") " pod="kube-system/kube-proxy-6slgn" Sep 12 17:10:19.274539 kubelet[2489]: I0912 17:10:19.273869 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrhn\" (UniqueName: \"kubernetes.io/projected/e0569445-bfd9-435c-a139-15e208375ea7-kube-api-access-8xrhn\") pod \"kube-proxy-6slgn\" (UID: \"e0569445-bfd9-435c-a139-15e208375ea7\") " pod="kube-system/kube-proxy-6slgn" Sep 12 17:10:19.275122 kubelet[2489]: I0912 17:10:19.273885 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-cgroup\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.275122 kubelet[2489]: I0912 17:10:19.274884 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-lib-modules\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.275616 kubelet[2489]: I0912 17:10:19.275365 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0569445-bfd9-435c-a139-15e208375ea7-lib-modules\") pod \"kube-proxy-6slgn\" (UID: \"e0569445-bfd9-435c-a139-15e208375ea7\") " pod="kube-system/kube-proxy-6slgn" Sep 12 17:10:19.275616 kubelet[2489]: I0912 
17:10:19.275509 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hostproc\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.275616 kubelet[2489]: I0912 17:10:19.275533 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-etc-cni-netd\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.275616 kubelet[2489]: I0912 17:10:19.275548 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-xtables-lock\") pod \"cilium-pcs22\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " pod="kube-system/cilium-pcs22" Sep 12 17:10:19.325135 systemd[1]: Created slice kubepods-besteffort-pod85a3a377_55ec_4fe1_9db7_547028df43df.slice - libcontainer container kubepods-besteffort-pod85a3a377_55ec_4fe1_9db7_547028df43df.slice. Sep 12 17:10:19.376704 kubelet[2489]: I0912 17:10:19.376657 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqv4\" (UniqueName: \"kubernetes.io/projected/85a3a377-55ec-4fe1-9db7-547028df43df-kube-api-access-6xqv4\") pod \"cilium-operator-6c4d7847fc-8546w\" (UID: \"85a3a377-55ec-4fe1-9db7-547028df43df\") " pod="kube-system/cilium-operator-6c4d7847fc-8546w" Sep 12 17:10:19.379482 kubelet[2489]: I0912 17:10:19.378261 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85a3a377-55ec-4fe1-9db7-547028df43df-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8546w\" (UID: \"85a3a377-55ec-4fe1-9db7-547028df43df\") " pod="kube-system/cilium-operator-6c4d7847fc-8546w" Sep 12 17:10:19.531027 kubelet[2489]: E0912 17:10:19.530899 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.532183 containerd[1439]: time="2025-09-12T17:10:19.532148929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6slgn,Uid:e0569445-bfd9-435c-a139-15e208375ea7,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:19.543609 kubelet[2489]: E0912 17:10:19.542518 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.543712 containerd[1439]: time="2025-09-12T17:10:19.543115161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcs22,Uid:38b53b81-c476-44b6-ae94-bd84b04cd7d0,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:19.591751 containerd[1439]: time="2025-09-12T17:10:19.591582277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:19.591751 containerd[1439]: time="2025-09-12T17:10:19.591661017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:19.591751 containerd[1439]: time="2025-09-12T17:10:19.591673180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.592105 containerd[1439]: time="2025-09-12T17:10:19.591786210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.605021 containerd[1439]: time="2025-09-12T17:10:19.604552466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:19.605171 containerd[1439]: time="2025-09-12T17:10:19.605053556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:19.605171 containerd[1439]: time="2025-09-12T17:10:19.605083283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.605300 containerd[1439]: time="2025-09-12T17:10:19.605251727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.629115 kubelet[2489]: E0912 17:10:19.628500 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.629200 systemd[1]: Started cri-containerd-1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8.scope - libcontainer container 1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8. Sep 12 17:10:19.630613 containerd[1439]: time="2025-09-12T17:10:19.630538857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8546w,Uid:85a3a377-55ec-4fe1-9db7-547028df43df,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:19.633627 systemd[1]: Started cri-containerd-7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344.scope - libcontainer container 7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344. Sep 12 17:10:19.668433 containerd[1439]: time="2025-09-12T17:10:19.668145969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:19.668433 containerd[1439]: time="2025-09-12T17:10:19.668217867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:19.668433 containerd[1439]: time="2025-09-12T17:10:19.668233792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.668433 containerd[1439]: time="2025-09-12T17:10:19.668330216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:19.675096 containerd[1439]: time="2025-09-12T17:10:19.675041990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcs22,Uid:38b53b81-c476-44b6-ae94-bd84b04cd7d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\"" Sep 12 17:10:19.676002 containerd[1439]: time="2025-09-12T17:10:19.675667231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6slgn,Uid:e0569445-bfd9-435c-a139-15e208375ea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344\"" Sep 12 17:10:19.676433 kubelet[2489]: E0912 17:10:19.676407 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.677701 kubelet[2489]: E0912 17:10:19.677465 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.679064 containerd[1439]: time="2025-09-12T17:10:19.679029339Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:10:19.687789 containerd[1439]: time="2025-09-12T17:10:19.687738228Z" level=info msg="CreateContainer within sandbox \"7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:10:19.702158 systemd[1]: Started cri-containerd-24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84.scope - libcontainer container 24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84. Sep 12 17:10:19.715250 containerd[1439]: time="2025-09-12T17:10:19.715199680Z" level=info msg="CreateContainer within sandbox \"7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3\"" Sep 12 17:10:19.716029 containerd[1439]: time="2025-09-12T17:10:19.715997206Z" level=info msg="StartContainer for \"48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3\"" Sep 12 17:10:19.739322 containerd[1439]: time="2025-09-12T17:10:19.739281499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8546w,Uid:85a3a377-55ec-4fe1-9db7-547028df43df,Namespace:kube-system,Attempt:0,} returns sandbox id \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\"" Sep 12 17:10:19.740278 kubelet[2489]: E0912 17:10:19.740253 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.756133 systemd[1]: Started cri-containerd-48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3.scope - libcontainer container 48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3. 
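The containerd entries above show the usual CRI sequence for kube-proxy-6slgn: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer follows. A rough sketch that pairs the sandbox and container ids from shortened copies of those two messages; the regexes match only this exact phrasing:

```python
import re

# Shortened copies of the two containerd messages above.
sandbox_msg = ('RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6slgn,...} '
               'returns sandbox id "7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344"')
container_msg = ('CreateContainer within sandbox "7944855da441ebfb00d4f89aa57f26c3aff26656aef040695c9e1cbf1cb7b344" '
                 'for &ContainerMetadata{Name:kube-proxy,...} returns container id '
                 '"48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3"')

pod = re.search(r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+)', sandbox_msg).group(1)
sandbox = re.search(r'returns sandbox id "([0-9a-f]+)"', sandbox_msg).group(1)
# The CreateContainer message references the same sandbox id.
assert re.search(r'within sandbox "([0-9a-f]+)"', container_msg).group(1) == sandbox
container = re.search(r'returns container id "([0-9a-f]+)"', container_msg).group(1)
print(f"{pod}: sandbox {sandbox[:12]}..., container {container[:12]}...")
```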
Sep 12 17:10:19.786066 containerd[1439]: time="2025-09-12T17:10:19.785953152Z" level=info msg="StartContainer for \"48da84bb0e2d3cadd5638378044ede8eba8b6d666515b340363f6dabaae859f3\" returns successfully" Sep 12 17:10:19.976342 kubelet[2489]: E0912 17:10:19.976232 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.993088 kubelet[2489]: E0912 17:10:19.993003 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:19.994810 kubelet[2489]: I0912 17:10:19.994677 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6slgn" podStartSLOduration=0.994647485 podStartE2EDuration="994.647485ms" podCreationTimestamp="2025-09-12 17:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:19.99284698 +0000 UTC m=+7.144258166" watchObservedRunningTime="2025-09-12 17:10:19.994647485 +0000 UTC m=+7.146058671" Sep 12 17:10:20.977967 kubelet[2489]: E0912 17:10:20.977794 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:24.540529 kubelet[2489]: E0912 17:10:24.540267 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:26.262409 kubelet[2489]: E0912 17:10:26.262327 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:27.082997 update_engine[1427]: I20250912 17:10:27.082944 1427 update_attempter.cc:509] Updating boot flags... Sep 12 17:10:27.115988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2877) Sep 12 17:10:27.166949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2879) Sep 12 17:10:27.207975 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2879) Sep 12 17:10:28.765743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233321902.mount: Deactivated successfully. 
Sep 12 17:10:30.355792 containerd[1439]: time="2025-09-12T17:10:30.355722427Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:30.356526 containerd[1439]: time="2025-09-12T17:10:30.356470696Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:10:30.357622 containerd[1439]: time="2025-09-12T17:10:30.357578777Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:30.359440 containerd[1439]: time="2025-09-12T17:10:30.359402602Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.680331013s" Sep 12 17:10:30.359481 containerd[1439]: time="2025-09-12T17:10:30.359441928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:10:30.361060 containerd[1439]: time="2025-09-12T17:10:30.361028758Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:10:30.371418 containerd[1439]: time="2025-09-12T17:10:30.371349338Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:10:30.388229 containerd[1439]: time="2025-09-12T17:10:30.388176742Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\"" Sep 12 17:10:30.389051 containerd[1439]: time="2025-09-12T17:10:30.389021185Z" level=info msg="StartContainer for \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\"" Sep 12 17:10:30.420152 systemd[1]: Started cri-containerd-fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491.scope - libcontainer container fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491. Sep 12 17:10:30.446257 containerd[1439]: time="2025-09-12T17:10:30.446209453Z" level=info msg="StartContainer for \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\" returns successfully" Sep 12 17:10:30.463266 systemd[1]: cri-containerd-fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491.scope: Deactivated successfully. 
Sep 12 17:10:30.598442 containerd[1439]: time="2025-09-12T17:10:30.583309409Z" level=info msg="shim disconnected" id=fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491 namespace=k8s.io Sep 12 17:10:30.598442 containerd[1439]: time="2025-09-12T17:10:30.598439527Z" level=warning msg="cleaning up after shim disconnected" id=fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491 namespace=k8s.io Sep 12 17:10:30.598743 containerd[1439]: time="2025-09-12T17:10:30.598461851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:10:31.007015 kubelet[2489]: E0912 17:10:31.005856 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:31.024165 containerd[1439]: time="2025-09-12T17:10:31.024013754Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:10:31.122420 containerd[1439]: time="2025-09-12T17:10:31.122315683Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\"" Sep 12 17:10:31.129515 containerd[1439]: time="2025-09-12T17:10:31.123103352Z" level=info msg="StartContainer for \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\"" Sep 12 17:10:31.160127 systemd[1]: Started cri-containerd-1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc.scope - libcontainer container 1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc. Sep 12 17:10:31.189965 containerd[1439]: time="2025-09-12T17:10:31.189767621Z" level=info msg="StartContainer for \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\" returns successfully" Sep 12 17:10:31.200484 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:10:31.201137 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:10:31.201226 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:10:31.207348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:10:31.209149 systemd[1]: cri-containerd-1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc.scope: Deactivated successfully. Sep 12 17:10:31.235576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:10:31.243800 containerd[1439]: time="2025-09-12T17:10:31.243747214Z" level=info msg="shim disconnected" id=1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc namespace=k8s.io Sep 12 17:10:31.243800 containerd[1439]: time="2025-09-12T17:10:31.243796141Z" level=warning msg="cleaning up after shim disconnected" id=1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc namespace=k8s.io Sep 12 17:10:31.243800 containerd[1439]: time="2025-09-12T17:10:31.243804622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:10:31.384627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491-rootfs.mount: Deactivated successfully. Sep 12 17:10:31.548005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111203366.mount: Deactivated successfully. 
Sep 12 17:10:32.011395 kubelet[2489]: E0912 17:10:32.011362 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:32.019781 containerd[1439]: time="2025-09-12T17:10:32.019636589Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:10:32.045486 containerd[1439]: time="2025-09-12T17:10:32.045424874Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\"" Sep 12 17:10:32.047106 containerd[1439]: time="2025-09-12T17:10:32.047051889Z" level=info msg="StartContainer for \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\"" Sep 12 17:10:32.080137 systemd[1]: Started cri-containerd-1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b.scope - libcontainer container 1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b. Sep 12 17:10:32.117245 containerd[1439]: time="2025-09-12T17:10:32.117194631Z" level=info msg="StartContainer for \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\" returns successfully" Sep 12 17:10:32.120300 systemd[1]: cri-containerd-1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b.scope: Deactivated successfully. Sep 12 17:10:32.191560 containerd[1439]: time="2025-09-12T17:10:32.191499882Z" level=info msg="shim disconnected" id=1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b namespace=k8s.io Sep 12 17:10:32.191560 containerd[1439]: time="2025-09-12T17:10:32.191552809Z" level=warning msg="cleaning up after shim disconnected" id=1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b namespace=k8s.io Sep 12 17:10:32.191560 containerd[1439]: time="2025-09-12T17:10:32.191563250Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:10:32.263251 containerd[1439]: time="2025-09-12T17:10:32.262464172Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:32.263251 containerd[1439]: time="2025-09-12T17:10:32.263187987Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:10:32.264098 containerd[1439]: time="2025-09-12T17:10:32.264043260Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:32.265589 containerd[1439]: time="2025-09-12T17:10:32.265493212Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.904425608s" Sep 12 17:10:32.265589 containerd[1439]: time="2025-09-12T17:10:32.265530217Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:10:32.270670 containerd[1439]: time="2025-09-12T17:10:32.270634411Z" level=info msg="CreateContainer within sandbox \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:10:32.284348 containerd[1439]: time="2025-09-12T17:10:32.284290014Z" level=info msg="CreateContainer within sandbox \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\"" Sep 12 17:10:32.286120 containerd[1439]: time="2025-09-12T17:10:32.285219696Z" level=info msg="StartContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\"" Sep 12 17:10:32.314076 systemd[1]: Started cri-containerd-de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625.scope - libcontainer container de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625. Sep 12 17:10:32.335583 containerd[1439]: time="2025-09-12T17:10:32.335541061Z" level=info msg="StartContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" returns successfully" Sep 12 17:10:33.013797 kubelet[2489]: E0912 17:10:33.013745 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:33.020279 kubelet[2489]: E0912 17:10:33.020245 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:33.039678 containerd[1439]: time="2025-09-12T17:10:33.039616073Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:10:33.055204 kubelet[2489]: I0912 17:10:33.054583 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8546w" podStartSLOduration=1.5292202480000001 podStartE2EDuration="14.054565717s" podCreationTimestamp="2025-09-12 17:10:19 +0000 UTC" firstStartedPulling="2025-09-12 17:10:19.740938207 +0000 UTC m=+6.892349353" lastFinishedPulling="2025-09-12 17:10:32.266283636 +0000 UTC m=+19.417694822" observedRunningTime="2025-09-12 17:10:33.029027819 +0000 UTC m=+20.180439005" watchObservedRunningTime="2025-09-12 17:10:33.054565717 +0000 UTC m=+20.205976863" Sep 12 17:10:33.066250 containerd[1439]: time="2025-09-12T17:10:33.066189982Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\"" Sep 12 17:10:33.066973 containerd[1439]: time="2025-09-12T17:10:33.066945437Z" level=info msg="StartContainer for \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\"" Sep 12 17:10:33.110105 systemd[1]: Started cri-containerd-644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9.scope - libcontainer container 644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9. 
Sep 12 17:10:33.131396 systemd[1]: cri-containerd-644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9.scope: Deactivated successfully. Sep 12 17:10:33.137114 containerd[1439]: time="2025-09-12T17:10:33.137026030Z" level=info msg="StartContainer for \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\" returns successfully" Sep 12 17:10:33.203024 containerd[1439]: time="2025-09-12T17:10:33.201433388Z" level=info msg="shim disconnected" id=644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9 namespace=k8s.io Sep 12 17:10:33.203024 containerd[1439]: time="2025-09-12T17:10:33.201483194Z" level=warning msg="cleaning up after shim disconnected" id=644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9 namespace=k8s.io Sep 12 17:10:33.203024 containerd[1439]: time="2025-09-12T17:10:33.201491275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:10:33.384761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9-rootfs.mount: Deactivated successfully. Sep 12 17:10:34.047423 kubelet[2489]: E0912 17:10:34.047032 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:34.047423 kubelet[2489]: E0912 17:10:34.047119 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:34.058785 containerd[1439]: time="2025-09-12T17:10:34.058735913Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:10:34.081807 containerd[1439]: time="2025-09-12T17:10:34.081742723Z" level=info msg="CreateContainer within sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\"" Sep 12 17:10:34.082709 containerd[1439]: time="2025-09-12T17:10:34.082680156Z" level=info msg="StartContainer for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\"" Sep 12 17:10:34.121098 systemd[1]: Started cri-containerd-a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34.scope - libcontainer container a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34. Sep 12 17:10:34.149624 containerd[1439]: time="2025-09-12T17:10:34.149478919Z" level=info msg="StartContainer for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" returns successfully" Sep 12 17:10:34.292086 kubelet[2489]: I0912 17:10:34.291711 2489 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:10:34.398886 systemd[1]: Created slice kubepods-burstable-poddc87890a_9da5_478a_a301_b6e102ce9883.slice - libcontainer container kubepods-burstable-poddc87890a_9da5_478a_a301_b6e102ce9883.slice. Sep 12 17:10:34.407318 systemd[1]: Created slice kubepods-burstable-podeb8c7a88_9c49_46e3_ac95_f26eb39912d6.slice - libcontainer container kubepods-burstable-podeb8c7a88_9c49_46e3_ac95_f26eb39912d6.slice. 
Sep 12 17:10:34.502946 kubelet[2489]: I0912 17:10:34.502880 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9j6\" (UniqueName: \"kubernetes.io/projected/dc87890a-9da5-478a-a301-b6e102ce9883-kube-api-access-5l9j6\") pod \"coredns-674b8bbfcf-2nzt6\" (UID: \"dc87890a-9da5-478a-a301-b6e102ce9883\") " pod="kube-system/coredns-674b8bbfcf-2nzt6" Sep 12 17:10:34.503139 kubelet[2489]: I0912 17:10:34.502975 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb8c7a88-9c49-46e3-ac95-f26eb39912d6-config-volume\") pod \"coredns-674b8bbfcf-c4g97\" (UID: \"eb8c7a88-9c49-46e3-ac95-f26eb39912d6\") " pod="kube-system/coredns-674b8bbfcf-c4g97" Sep 12 17:10:34.503139 kubelet[2489]: I0912 17:10:34.503011 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc87890a-9da5-478a-a301-b6e102ce9883-config-volume\") pod \"coredns-674b8bbfcf-2nzt6\" (UID: \"dc87890a-9da5-478a-a301-b6e102ce9883\") " pod="kube-system/coredns-674b8bbfcf-2nzt6" Sep 12 17:10:34.503139 kubelet[2489]: I0912 17:10:34.503026 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crz5\" (UniqueName: \"kubernetes.io/projected/eb8c7a88-9c49-46e3-ac95-f26eb39912d6-kube-api-access-7crz5\") pod \"coredns-674b8bbfcf-c4g97\" (UID: \"eb8c7a88-9c49-46e3-ac95-f26eb39912d6\") " pod="kube-system/coredns-674b8bbfcf-c4g97" Sep 12 17:10:34.701379 kubelet[2489]: E0912 17:10:34.701332 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:34.702604 containerd[1439]: time="2025-09-12T17:10:34.702560034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nzt6,Uid:dc87890a-9da5-478a-a301-b6e102ce9883,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:34.711108 kubelet[2489]: E0912 17:10:34.711067 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:34.713269 containerd[1439]: time="2025-09-12T17:10:34.713229958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c4g97,Uid:eb8c7a88-9c49-46e3-ac95-f26eb39912d6,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:35.052315 kubelet[2489]: E0912 17:10:35.052189 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:35.069081 kubelet[2489]: I0912 17:10:35.069018 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pcs22" podStartSLOduration=5.385933978 podStartE2EDuration="16.06900197s" podCreationTimestamp="2025-09-12 17:10:19 +0000 UTC" firstStartedPulling="2025-09-12 17:10:19.677451092 +0000 UTC m=+6.828862278" lastFinishedPulling="2025-09-12 17:10:30.360519084 +0000 UTC m=+17.511930270" observedRunningTime="2025-09-12 17:10:35.068647169 +0000 UTC m=+22.220058355" watchObservedRunningTime="2025-09-12 17:10:35.06900197 +0000 UTC m=+22.220413156" Sep 12 17:10:36.053820 kubelet[2489]: E0912 17:10:36.053775 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:36.225595 systemd-networkd[1382]: cilium_host: Link UP Sep 12 17:10:36.225727 systemd-networkd[1382]: cilium_net: Link UP Sep 12 17:10:36.226076 systemd-networkd[1382]: cilium_net: Gained carrier Sep 12 17:10:36.226198 systemd-networkd[1382]: cilium_host: Gained carrier Sep 12 17:10:36.226318 systemd-networkd[1382]: cilium_net: Gained IPv6LL Sep 12 17:10:36.226445 systemd-networkd[1382]: cilium_host: Gained IPv6LL Sep 12 17:10:36.308661 systemd-networkd[1382]: cilium_vxlan: Link UP Sep 12 17:10:36.308678 systemd-networkd[1382]: cilium_vxlan: Gained carrier Sep 12 17:10:36.575942 kernel: NET: Registered PF_ALG protocol family Sep 12 17:10:37.055370 kubelet[2489]: E0912 17:10:37.055334 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:37.211744 systemd-networkd[1382]: lxc_health: Link UP Sep 12 17:10:37.217779 systemd-networkd[1382]: lxc_health: Gained carrier Sep 12 17:10:37.748990 systemd-networkd[1382]: lxc3bb96924836c: Link UP Sep 12 17:10:37.758942 kernel: eth0: renamed from tmpf1cdd Sep 12 17:10:37.764842 systemd-networkd[1382]: lxc4476183f0358: Link UP Sep 12 17:10:37.774941 kernel: eth0: renamed from tmpb532b Sep 12 17:10:37.785409 systemd-networkd[1382]: lxc4476183f0358: Gained carrier Sep 12 17:10:37.785556 systemd-networkd[1382]: lxc3bb96924836c: Gained carrier Sep 12 17:10:37.852091 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL Sep 12 17:10:38.057864 kubelet[2489]: E0912 17:10:38.057756 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:38.491202 systemd-networkd[1382]: lxc_health: Gained IPv6LL Sep 12 17:10:39.195140 systemd-networkd[1382]: lxc4476183f0358: Gained IPv6LL Sep 12 17:10:39.515078 systemd-networkd[1382]: lxc3bb96924836c: Gained IPv6LL Sep 12 17:10:41.427304 containerd[1439]: time="2025-09-12T17:10:41.427222294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:41.427304 containerd[1439]: time="2025-09-12T17:10:41.427278739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:41.427304 containerd[1439]: time="2025-09-12T17:10:41.427295100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:41.427756 containerd[1439]: time="2025-09-12T17:10:41.427376548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:41.432313 containerd[1439]: time="2025-09-12T17:10:41.432063328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:41.432313 containerd[1439]: time="2025-09-12T17:10:41.432143215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:41.432313 containerd[1439]: time="2025-09-12T17:10:41.432173178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:41.432313 containerd[1439]: time="2025-09-12T17:10:41.432267826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:41.450072 systemd[1]: Started cri-containerd-f1cdd2e3d0b1f5ef6c2b495c50d7218e19bc024dc5c1b241a8e2b67a0dbe39d1.scope - libcontainer container f1cdd2e3d0b1f5ef6c2b495c50d7218e19bc024dc5c1b241a8e2b67a0dbe39d1. Sep 12 17:10:41.455258 systemd[1]: Started cri-containerd-b532b641217a929f12eb710f65b590d999af889d745befa4214ae2f73b49a19a.scope - libcontainer container b532b641217a929f12eb710f65b590d999af889d745befa4214ae2f73b49a19a. Sep 12 17:10:41.464717 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:10:41.466754 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:10:41.485325 containerd[1439]: time="2025-09-12T17:10:41.485285183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nzt6,Uid:dc87890a-9da5-478a-a301-b6e102ce9883,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1cdd2e3d0b1f5ef6c2b495c50d7218e19bc024dc5c1b241a8e2b67a0dbe39d1\"" Sep 12 17:10:41.486449 kubelet[2489]: E0912 17:10:41.486266 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:41.487943 containerd[1439]: time="2025-09-12T17:10:41.487898977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c4g97,Uid:eb8c7a88-9c49-46e3-ac95-f26eb39912d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b532b641217a929f12eb710f65b590d999af889d745befa4214ae2f73b49a19a\"" Sep 12 17:10:41.488482 kubelet[2489]: E0912 17:10:41.488438 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:41.524168 containerd[1439]: time="2025-09-12T17:10:41.524121187Z" level=info msg="CreateContainer within sandbox \"f1cdd2e3d0b1f5ef6c2b495c50d7218e19bc024dc5c1b241a8e2b67a0dbe39d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:10:41.545804 containerd[1439]: time="2025-09-12T17:10:41.545746247Z" level=info msg="CreateContainer within sandbox \"b532b641217a929f12eb710f65b590d999af889d745befa4214ae2f73b49a19a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:10:41.562029 containerd[1439]: time="2025-09-12T17:10:41.561978464Z" level=info msg="CreateContainer within sandbox \"f1cdd2e3d0b1f5ef6c2b495c50d7218e19bc024dc5c1b241a8e2b67a0dbe39d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a758447788d0734f3365d3d6eaabd8553eafdd8f69eaf03ea0bd3c524bc98753\"" Sep 12 17:10:41.562569 containerd[1439]: time="2025-09-12T17:10:41.562530473Z" level=info msg="StartContainer for \"a758447788d0734f3365d3d6eaabd8553eafdd8f69eaf03ea0bd3c524bc98753\"" Sep 12 17:10:41.564776 containerd[1439]: time="2025-09-12T17:10:41.564558535Z" level=info msg="CreateContainer within sandbox \"b532b641217a929f12eb710f65b590d999af889d745befa4214ae2f73b49a19a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"333b98603ef593e0adaed42ea354a93345350f3ffec4282b2e116d199d376fde\"" Sep 12 17:10:41.566006 containerd[1439]: 
time="2025-09-12T17:10:41.565969782Z" level=info msg="StartContainer for \"333b98603ef593e0adaed42ea354a93345350f3ffec4282b2e116d199d376fde\"" Sep 12 17:10:41.596083 systemd[1]: Started cri-containerd-a758447788d0734f3365d3d6eaabd8553eafdd8f69eaf03ea0bd3c524bc98753.scope - libcontainer container a758447788d0734f3365d3d6eaabd8553eafdd8f69eaf03ea0bd3c524bc98753. Sep 12 17:10:41.613100 systemd[1]: Started cri-containerd-333b98603ef593e0adaed42ea354a93345350f3ffec4282b2e116d199d376fde.scope - libcontainer container 333b98603ef593e0adaed42ea354a93345350f3ffec4282b2e116d199d376fde. Sep 12 17:10:41.632254 containerd[1439]: time="2025-09-12T17:10:41.632209405Z" level=info msg="StartContainer for \"a758447788d0734f3365d3d6eaabd8553eafdd8f69eaf03ea0bd3c524bc98753\" returns successfully" Sep 12 17:10:41.645602 containerd[1439]: time="2025-09-12T17:10:41.645558522Z" level=info msg="StartContainer for \"333b98603ef593e0adaed42ea354a93345350f3ffec4282b2e116d199d376fde\" returns successfully" Sep 12 17:10:42.066114 kubelet[2489]: E0912 17:10:42.066004 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:42.069242 kubelet[2489]: E0912 17:10:42.069207 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:42.079927 kubelet[2489]: I0912 17:10:42.079849 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2nzt6" podStartSLOduration=23.079832299 podStartE2EDuration="23.079832299s" podCreationTimestamp="2025-09-12 17:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:42.078074948 +0000 UTC m=+29.229486134" watchObservedRunningTime="2025-09-12 17:10:42.079832299 +0000 UTC m=+29.231243445" Sep 12 17:10:42.102872 kubelet[2489]: I0912 17:10:42.102232 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c4g97" podStartSLOduration=23.102214712 podStartE2EDuration="23.102214712s" podCreationTimestamp="2025-09-12 17:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:42.090637713 +0000 UTC m=+29.242048899" watchObservedRunningTime="2025-09-12 17:10:42.102214712 +0000 UTC m=+29.253625898" Sep 12 17:10:43.070898 kubelet[2489]: E0912 17:10:43.070680 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:43.070898 kubelet[2489]: E0912 17:10:43.070814 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:44.072047 kubelet[2489]: E0912 17:10:44.071972 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:44.073747 kubelet[2489]: E0912 17:10:44.073171 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 
17:10:45.398757 kubelet[2489]: I0912 17:10:45.398547 2489 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:10:45.399447 kubelet[2489]: E0912 17:10:45.399409 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:46.076513 kubelet[2489]: E0912 17:10:46.075857 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:10:46.631796 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:58466.service - OpenSSH per-connection server daemon (10.0.0.1:58466). Sep 12 17:10:46.677334 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 58466 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:10:46.679480 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:46.684117 systemd-logind[1418]: New session 8 of user core. Sep 12 17:10:46.693132 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:10:46.833642 sshd[3918]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:46.836959 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:10:46.837145 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:58466.service: Deactivated successfully. Sep 12 17:10:46.838842 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:10:46.841617 systemd-logind[1418]: Removed session 8. Sep 12 17:10:51.844770 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:44250.service - OpenSSH per-connection server daemon (10.0.0.1:44250). Sep 12 17:10:51.899885 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 44250 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:10:51.901323 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:51.906342 systemd-logind[1418]: New session 9 of user core. Sep 12 17:10:51.914143 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:10:52.043280 sshd[3938]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:52.047146 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:44250.service: Deactivated successfully. Sep 12 17:10:52.049833 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:10:52.050771 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:10:52.051707 systemd-logind[1418]: Removed session 9. Sep 12 17:10:57.059439 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:44266.service - OpenSSH per-connection server daemon (10.0.0.1:44266). Sep 12 17:10:57.103480 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 44266 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:10:57.104898 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:57.113729 systemd-logind[1418]: New session 10 of user core. Sep 12 17:10:57.119182 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:10:57.253852 sshd[3953]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:57.256859 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:44266.service: Deactivated successfully. Sep 12 17:10:57.258696 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:10:57.262219 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. 
Sep 12 17:10:57.263057 systemd-logind[1418]: Removed session 10. Sep 12 17:11:02.273420 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:50158.service - OpenSSH per-connection server daemon (10.0.0.1:50158). Sep 12 17:11:02.315814 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 50158 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:02.317218 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:02.322485 systemd-logind[1418]: New session 11 of user core. Sep 12 17:11:02.330143 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:11:02.457255 sshd[3969]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:02.471197 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:50158.service: Deactivated successfully. Sep 12 17:11:02.473432 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:11:02.475009 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:11:02.476838 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:50170.service - OpenSSH per-connection server daemon (10.0.0.1:50170). Sep 12 17:11:02.478091 systemd-logind[1418]: Removed session 11. Sep 12 17:11:02.539403 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 50170 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:02.540593 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:02.546043 systemd-logind[1418]: New session 12 of user core. Sep 12 17:11:02.558131 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:11:02.744921 sshd[3984]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:02.762256 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:50170.service: Deactivated successfully. Sep 12 17:11:02.766673 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:11:02.769614 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:11:02.775387 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:50178.service - OpenSSH per-connection server daemon (10.0.0.1:50178). Sep 12 17:11:02.777046 systemd-logind[1418]: Removed session 12. Sep 12 17:11:02.814610 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 50178 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:02.816050 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:02.819945 systemd-logind[1418]: New session 13 of user core. Sep 12 17:11:02.832054 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:11:02.945472 sshd[3997]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:02.949200 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:50178.service: Deactivated successfully. Sep 12 17:11:02.950931 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:11:02.951557 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:11:02.952481 systemd-logind[1418]: Removed session 13. Sep 12 17:11:07.956959 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186). Sep 12 17:11:08.002103 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:08.004842 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:08.008892 systemd-logind[1418]: New session 14 of user core. 
Sep 12 17:11:08.023131 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:11:08.154301 sshd[4012]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:08.160112 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:50186.service: Deactivated successfully. Sep 12 17:11:08.161726 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:11:08.162769 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:11:08.164407 systemd-logind[1418]: Removed session 14. Sep 12 17:11:13.169705 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:52410.service - OpenSSH per-connection server daemon (10.0.0.1:52410). Sep 12 17:11:13.212882 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 52410 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:13.214943 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:13.219784 systemd-logind[1418]: New session 15 of user core. Sep 12 17:11:13.231082 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:11:13.364590 sshd[4029]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:13.376063 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:52410.service: Deactivated successfully. Sep 12 17:11:13.377790 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:11:13.379946 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:11:13.386266 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:52422.service - OpenSSH per-connection server daemon (10.0.0.1:52422). Sep 12 17:11:13.389041 systemd-logind[1418]: Removed session 15. Sep 12 17:11:13.428152 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 52422 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:13.429533 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:13.434790 systemd-logind[1418]: New session 16 of user core. Sep 12 17:11:13.447091 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:11:13.819331 sshd[4044]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:13.830509 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:52422.service: Deactivated successfully. Sep 12 17:11:13.833442 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:11:13.835288 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:11:13.845166 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:52428.service - OpenSSH per-connection server daemon (10.0.0.1:52428). Sep 12 17:11:13.846296 systemd-logind[1418]: Removed session 16. Sep 12 17:11:13.884822 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 52428 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:13.886193 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:13.890492 systemd-logind[1418]: New session 17 of user core. Sep 12 17:11:13.910163 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:11:14.586478 sshd[4056]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:14.596248 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:52428.service: Deactivated successfully. Sep 12 17:11:14.598618 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:11:14.600215 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. 
Sep 12 17:11:14.605611 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:52436.service - OpenSSH per-connection server daemon (10.0.0.1:52436). Sep 12 17:11:14.607041 systemd-logind[1418]: Removed session 17. Sep 12 17:11:14.650964 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 52436 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:14.652389 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:14.656413 systemd-logind[1418]: New session 18 of user core. Sep 12 17:11:14.663672 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:11:14.897389 sshd[4079]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:14.906827 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:52436.service: Deactivated successfully. Sep 12 17:11:14.910388 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:11:14.911660 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:11:14.917161 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:52440.service - OpenSSH per-connection server daemon (10.0.0.1:52440). Sep 12 17:11:14.918161 systemd-logind[1418]: Removed session 18. Sep 12 17:11:14.954838 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 52440 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:14.956543 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:14.960262 systemd-logind[1418]: New session 19 of user core. Sep 12 17:11:14.969067 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:11:15.081766 sshd[4092]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:15.084311 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:52440.service: Deactivated successfully. Sep 12 17:11:15.085837 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:11:15.087117 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:11:15.088261 systemd-logind[1418]: Removed session 19. Sep 12 17:11:20.095522 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:57990.service - OpenSSH per-connection server daemon (10.0.0.1:57990). Sep 12 17:11:20.142348 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 57990 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:20.143745 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:20.148063 systemd-logind[1418]: New session 20 of user core. Sep 12 17:11:20.154268 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:11:20.290075 sshd[4110]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:20.294137 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:57990.service: Deactivated successfully. Sep 12 17:11:20.296016 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:11:20.296697 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:11:20.297727 systemd-logind[1418]: Removed session 20. 
Sep 12 17:11:20.944291 kubelet[2489]: E0912 17:11:20.944222 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:11:22.944842 kubelet[2489]: E0912 17:11:22.944739 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:11:25.306672 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:57994.service - OpenSSH per-connection server daemon (10.0.0.1:57994). Sep 12 17:11:25.354095 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:25.355832 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:25.360389 systemd-logind[1418]: New session 21 of user core. Sep 12 17:11:25.376303 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:11:25.507243 sshd[4124]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:25.511028 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:57994.service: Deactivated successfully. Sep 12 17:11:25.512842 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:11:25.513454 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:11:25.514410 systemd-logind[1418]: Removed session 21. Sep 12 17:11:26.944556 kubelet[2489]: E0912 17:11:26.944508 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:11:30.527745 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Sep 12 17:11:30.570203 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:30.571638 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:30.581653 systemd-logind[1418]: New session 22 of user core. Sep 12 17:11:30.592218 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:11:30.725784 sshd[4138]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:30.738892 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:46624.service: Deactivated successfully. Sep 12 17:11:30.742222 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:11:30.743837 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:11:30.750280 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:46628.service - OpenSSH per-connection server daemon (10.0.0.1:46628). Sep 12 17:11:30.751424 systemd-logind[1418]: Removed session 22. Sep 12 17:11:30.786738 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 46628 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y Sep 12 17:11:30.787839 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:30.795745 systemd-logind[1418]: New session 23 of user core. Sep 12 17:11:30.807143 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 17:11:32.376018 containerd[1439]: time="2025-09-12T17:11:32.375830537Z" level=info msg="StopContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" with timeout 30 (s)" Sep 12 17:11:32.376589 containerd[1439]: time="2025-09-12T17:11:32.376298022Z" level=info msg="Stop container \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" with signal terminated" Sep 12 17:11:32.391173 systemd[1]: cri-containerd-de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625.scope: Deactivated successfully. Sep 12 17:11:32.412621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625-rootfs.mount: Deactivated successfully. Sep 12 17:11:32.415123 containerd[1439]: time="2025-09-12T17:11:32.414997392Z" level=info msg="StopContainer for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" with timeout 2 (s)" Sep 12 17:11:32.415310 containerd[1439]: time="2025-09-12T17:11:32.415284435Z" level=info msg="Stop container \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" with signal terminated" Sep 12 17:11:32.416978 containerd[1439]: time="2025-09-12T17:11:32.416367446Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:11:32.420158 containerd[1439]: time="2025-09-12T17:11:32.420094566Z" level=info msg="shim disconnected" id=de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625 namespace=k8s.io Sep 12 17:11:32.420454 containerd[1439]: time="2025-09-12T17:11:32.420292248Z" level=warning msg="cleaning up after shim disconnected" id=de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625 namespace=k8s.io Sep 12 17:11:32.420454 containerd[1439]: time="2025-09-12T17:11:32.420308128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:32.421597 systemd-networkd[1382]: lxc_health: Link DOWN Sep 12 17:11:32.421603 systemd-networkd[1382]: lxc_health: Lost carrier Sep 12 17:11:32.442447 systemd[1]: cri-containerd-a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34.scope: Deactivated successfully. Sep 12 17:11:32.442979 systemd[1]: cri-containerd-a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34.scope: Consumed 6.387s CPU time. Sep 12 17:11:32.479334 containerd[1439]: time="2025-09-12T17:11:32.479006990Z" level=info msg="StopContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" returns successfully" Sep 12 17:11:32.479694 containerd[1439]: time="2025-09-12T17:11:32.479668077Z" level=info msg="StopPodSandbox for \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\"" Sep 12 17:11:32.479743 containerd[1439]: time="2025-09-12T17:11:32.479712877Z" level=info msg="Container to stop \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:32.481596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84-shm.mount: Deactivated successfully. Sep 12 17:11:32.494877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:32.497720 systemd[1]: cri-containerd-24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84.scope: Deactivated successfully. Sep 12 17:11:32.516487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84-rootfs.mount: Deactivated successfully. Sep 12 17:11:32.518135 containerd[1439]: time="2025-09-12T17:11:32.518079284Z" level=info msg="shim disconnected" id=a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34 namespace=k8s.io Sep 12 17:11:32.518135 containerd[1439]: time="2025-09-12T17:11:32.518136124Z" level=warning msg="cleaning up after shim disconnected" id=a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34 namespace=k8s.io Sep 12 17:11:32.518267 containerd[1439]: time="2025-09-12T17:11:32.518144605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:32.518305 containerd[1439]: time="2025-09-12T17:11:32.518081764Z" level=info msg="shim disconnected" id=24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84 namespace=k8s.io Sep 12 17:11:32.518335 containerd[1439]: time="2025-09-12T17:11:32.518305846Z" level=warning msg="cleaning up after shim disconnected" id=24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84 namespace=k8s.io Sep 12 17:11:32.518335 containerd[1439]: time="2025-09-12T17:11:32.518315766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:32.532044 containerd[1439]: time="2025-09-12T17:11:32.531990351Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:11:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:11:32.535189 containerd[1439]: time="2025-09-12T17:11:32.535137865Z" level=info msg="TearDown network for sandbox \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\" successfully" Sep 12 17:11:32.535189 containerd[1439]: time="2025-09-12T17:11:32.535179545Z" level=info msg="StopPodSandbox for \"24129e7a59507bb6df9daf82c5282ed4e47d945b5a862f13e5b18ec792d5af84\" returns successfully" Sep 12 17:11:32.535339 containerd[1439]: time="2025-09-12T17:11:32.535151265Z" level=info msg="StopContainer for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" returns successfully" Sep 12 17:11:32.535683 containerd[1439]: time="2025-09-12T17:11:32.535661030Z" level=info msg="StopPodSandbox for \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\"" Sep 12 17:11:32.535718 containerd[1439]: time="2025-09-12T17:11:32.535695150Z" level=info msg="Container to stop \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:32.535718 containerd[1439]: time="2025-09-12T17:11:32.535706391Z" level=info msg="Container to stop \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:32.535718 containerd[1439]: time="2025-09-12T17:11:32.535715711Z" level=info msg="Container to stop \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:32.535788 containerd[1439]: time="2025-09-12T17:11:32.535726471Z" level=info msg="Container to stop \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Sep 12 17:11:32.535788 containerd[1439]: time="2025-09-12T17:11:32.535736151Z" level=info msg="Container to stop \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:32.541067 systemd[1]: cri-containerd-1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8.scope: Deactivated successfully. Sep 12 17:11:32.567740 containerd[1439]: time="2025-09-12T17:11:32.567675969Z" level=info msg="shim disconnected" id=1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8 namespace=k8s.io Sep 12 17:11:32.567740 containerd[1439]: time="2025-09-12T17:11:32.567736850Z" level=warning msg="cleaning up after shim disconnected" id=1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8 namespace=k8s.io Sep 12 17:11:32.567740 containerd[1439]: time="2025-09-12T17:11:32.567746730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:32.575441 kubelet[2489]: I0912 17:11:32.575398 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85a3a377-55ec-4fe1-9db7-547028df43df-cilium-config-path\") pod \"85a3a377-55ec-4fe1-9db7-547028df43df\" (UID: \"85a3a377-55ec-4fe1-9db7-547028df43df\") " Sep 12 17:11:32.575888 kubelet[2489]: I0912 17:11:32.575464 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xqv4\" (UniqueName: \"kubernetes.io/projected/85a3a377-55ec-4fe1-9db7-547028df43df-kube-api-access-6xqv4\") pod \"85a3a377-55ec-4fe1-9db7-547028df43df\" (UID: \"85a3a377-55ec-4fe1-9db7-547028df43df\") " Sep 12 17:11:32.580945 kubelet[2489]: I0912 17:11:32.579949 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a3a377-55ec-4fe1-9db7-547028df43df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85a3a377-55ec-4fe1-9db7-547028df43df" (UID: "85a3a377-55ec-4fe1-9db7-547028df43df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:11:32.581530 kubelet[2489]: I0912 17:11:32.581496 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a3a377-55ec-4fe1-9db7-547028df43df-kube-api-access-6xqv4" (OuterVolumeSpecName: "kube-api-access-6xqv4") pod "85a3a377-55ec-4fe1-9db7-547028df43df" (UID: "85a3a377-55ec-4fe1-9db7-547028df43df"). InnerVolumeSpecName "kube-api-access-6xqv4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:32.581893 containerd[1439]: time="2025-09-12T17:11:32.581858759Z" level=info msg="TearDown network for sandbox \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" successfully" Sep 12 17:11:32.582072 containerd[1439]: time="2025-09-12T17:11:32.581974761Z" level=info msg="StopPodSandbox for \"1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8\" returns successfully" Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676337 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hubble-tls\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676384 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-etc-cni-netd\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676402 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cni-path\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676419 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38b53b81-c476-44b6-ae94-bd84b04cd7d0-clustermesh-secrets\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676433 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-lib-modules\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676466 kubelet[2489]: I0912 17:11:32.676448 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-xtables-lock\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676479 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9s4c\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-kube-api-access-g9s4c\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676494 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hostproc\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676514 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-config-path\") pod 
\"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676529 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-kernel\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676550 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-net\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676694 kubelet[2489]: I0912 17:11:32.676564 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-run\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676818 kubelet[2489]: I0912 17:11:32.676577 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-bpf-maps\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676818 kubelet[2489]: I0912 17:11:32.676592 2489 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-cgroup\") pod \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\" (UID: \"38b53b81-c476-44b6-ae94-bd84b04cd7d0\") " Sep 12 17:11:32.676818 kubelet[2489]: I0912 17:11:32.676627 2489 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6xqv4\" (UniqueName: \"kubernetes.io/projected/85a3a377-55ec-4fe1-9db7-547028df43df-kube-api-access-6xqv4\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.676818 kubelet[2489]: I0912 17:11:32.676638 2489 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85a3a377-55ec-4fe1-9db7-547028df43df-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.676818 kubelet[2489]: I0912 17:11:32.676685 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.677483 kubelet[2489]: I0912 17:11:32.677457 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678714 kubelet[2489]: I0912 17:11:32.678340 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678789 kubelet[2489]: I0912 17:11:32.678372 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678857 kubelet[2489]: I0912 17:11:32.678830 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678891 kubelet[2489]: I0912 17:11:32.678461 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678891 kubelet[2489]: I0912 17:11:32.678473 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678891 kubelet[2489]: I0912 17:11:32.678487 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.678891 kubelet[2489]: I0912 17:11:32.678788 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.679363 kubelet[2489]: I0912 17:11:32.679328 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:11:32.681014 kubelet[2489]: I0912 17:11:32.680980 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:11:32.681208 kubelet[2489]: I0912 17:11:32.681181 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-kube-api-access-g9s4c" (OuterVolumeSpecName: "kube-api-access-g9s4c") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "kube-api-access-g9s4c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:32.681284 kubelet[2489]: I0912 17:11:32.681213 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:32.686683 kubelet[2489]: I0912 17:11:32.686635 2489 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b53b81-c476-44b6-ae94-bd84b04cd7d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "38b53b81-c476-44b6-ae94-bd84b04cd7d0" (UID: "38b53b81-c476-44b6-ae94-bd84b04cd7d0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:11:32.777019 kubelet[2489]: I0912 17:11:32.776976 2489 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777019 kubelet[2489]: I0912 17:11:32.777010 2489 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777019 kubelet[2489]: I0912 17:11:32.777019 2489 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777019 kubelet[2489]: I0912 17:11:32.777030 2489 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777039 2489 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777046 2489 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777053 2489 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38b53b81-c476-44b6-ae94-bd84b04cd7d0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777063 2489 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777070 2489 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777079 2489 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9s4c\" (UniqueName: \"kubernetes.io/projected/38b53b81-c476-44b6-ae94-bd84b04cd7d0-kube-api-access-g9s4c\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777087 2489 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777214 kubelet[2489]: I0912 17:11:32.777095 2489 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38b53b81-c476-44b6-ae94-bd84b04cd7d0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.777400 kubelet[2489]: I0912 17:11:32.777105 2489 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 
12 17:11:32.777400 kubelet[2489]: I0912 17:11:32.777115 2489 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38b53b81-c476-44b6-ae94-bd84b04cd7d0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:32.953377 systemd[1]: Removed slice kubepods-burstable-pod38b53b81_c476_44b6_ae94_bd84b04cd7d0.slice - libcontainer container kubepods-burstable-pod38b53b81_c476_44b6_ae94_bd84b04cd7d0.slice. Sep 12 17:11:32.953668 systemd[1]: kubepods-burstable-pod38b53b81_c476_44b6_ae94_bd84b04cd7d0.slice: Consumed 6.469s CPU time. Sep 12 17:11:32.955535 systemd[1]: Removed slice kubepods-besteffort-pod85a3a377_55ec_4fe1_9db7_547028df43df.slice - libcontainer container kubepods-besteffort-pod85a3a377_55ec_4fe1_9db7_547028df43df.slice. Sep 12 17:11:33.004197 kubelet[2489]: E0912 17:11:33.004156 2489 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:11:33.204025 kubelet[2489]: I0912 17:11:33.203687 2489 scope.go:117] "RemoveContainer" containerID="de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625" Sep 12 17:11:33.205445 containerd[1439]: time="2025-09-12T17:11:33.205405501Z" level=info msg="RemoveContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\"" Sep 12 17:11:33.210693 containerd[1439]: time="2025-09-12T17:11:33.210641320Z" level=info msg="RemoveContainer for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" returns successfully" Sep 12 17:11:33.211036 kubelet[2489]: I0912 17:11:33.210937 2489 scope.go:117] "RemoveContainer" containerID="de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625" Sep 12 17:11:33.211367 containerd[1439]: time="2025-09-12T17:11:33.211321047Z" level=error msg="ContainerStatus for \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\": not found" Sep 12 17:11:33.219261 kubelet[2489]: E0912 17:11:33.219224 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\": not found" containerID="de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625" Sep 12 17:11:33.219395 kubelet[2489]: I0912 17:11:33.219265 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625"} err="failed to get container status \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\": rpc error: code = NotFound desc = an error occurred when try to find container \"de0c422fe154e7c9b048b60bbebb6ade926c9b619bd157025b3fbc0090ba0625\": not found" Sep 12 17:11:33.219395 kubelet[2489]: I0912 17:11:33.219313 2489 scope.go:117] "RemoveContainer" containerID="a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34" Sep 12 17:11:33.221048 containerd[1439]: time="2025-09-12T17:11:33.221015197Z" level=info msg="RemoveContainer for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\"" Sep 12 17:11:33.223972 containerd[1439]: time="2025-09-12T17:11:33.223935750Z" level=info msg="RemoveContainer for 
\"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" returns successfully" Sep 12 17:11:33.224182 kubelet[2489]: I0912 17:11:33.224117 2489 scope.go:117] "RemoveContainer" containerID="644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9" Sep 12 17:11:33.225232 containerd[1439]: time="2025-09-12T17:11:33.225203484Z" level=info msg="RemoveContainer for \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\"" Sep 12 17:11:33.228149 containerd[1439]: time="2025-09-12T17:11:33.228118317Z" level=info msg="RemoveContainer for \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\" returns successfully" Sep 12 17:11:33.228339 kubelet[2489]: I0912 17:11:33.228315 2489 scope.go:117] "RemoveContainer" containerID="1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b" Sep 12 17:11:33.230229 containerd[1439]: time="2025-09-12T17:11:33.230196340Z" level=info msg="RemoveContainer for \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\"" Sep 12 17:11:33.233514 containerd[1439]: time="2025-09-12T17:11:33.233403656Z" level=info msg="RemoveContainer for \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\" returns successfully" Sep 12 17:11:33.235081 kubelet[2489]: I0912 17:11:33.235048 2489 scope.go:117] "RemoveContainer" containerID="1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc" Sep 12 17:11:33.237550 containerd[1439]: time="2025-09-12T17:11:33.237518062Z" level=info msg="RemoveContainer for \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\"" Sep 12 17:11:33.240038 containerd[1439]: time="2025-09-12T17:11:33.240010331Z" level=info msg="RemoveContainer for \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\" returns successfully" Sep 12 17:11:33.240177 kubelet[2489]: I0912 17:11:33.240157 2489 scope.go:117] "RemoveContainer" containerID="fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491" Sep 12 17:11:33.241169 containerd[1439]: time="2025-09-12T17:11:33.240960781Z" level=info msg="RemoveContainer for \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\"" Sep 12 17:11:33.243289 containerd[1439]: time="2025-09-12T17:11:33.243256527Z" level=info msg="RemoveContainer for \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\" returns successfully" Sep 12 17:11:33.243594 kubelet[2489]: I0912 17:11:33.243565 2489 scope.go:117] "RemoveContainer" containerID="a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34" Sep 12 17:11:33.243833 containerd[1439]: time="2025-09-12T17:11:33.243788853Z" level=error msg="ContainerStatus for \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\": not found" Sep 12 17:11:33.244015 kubelet[2489]: E0912 17:11:33.243964 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\": not found" containerID="a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34" Sep 12 17:11:33.244056 kubelet[2489]: I0912 17:11:33.244022 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34"} err="failed to get container status 
\"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\": rpc error: code = NotFound desc = an error occurred when try to find container \"a602301d47bca22014afb0e1493469338983cf0504bd144d3cd05619df725e34\": not found" Sep 12 17:11:33.244056 kubelet[2489]: I0912 17:11:33.244046 2489 scope.go:117] "RemoveContainer" containerID="644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9" Sep 12 17:11:33.244285 containerd[1439]: time="2025-09-12T17:11:33.244251298Z" level=error msg="ContainerStatus for \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\": not found" Sep 12 17:11:33.244437 kubelet[2489]: E0912 17:11:33.244405 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\": not found" containerID="644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9" Sep 12 17:11:33.244520 kubelet[2489]: I0912 17:11:33.244440 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9"} err="failed to get container status \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"644fb64df369c9c4902244b76070f57bf5c5b28ebecd097cd0a7eb2fb39a50d9\": not found" Sep 12 17:11:33.244520 kubelet[2489]: I0912 17:11:33.244457 2489 scope.go:117] "RemoveContainer" containerID="1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b" Sep 12 17:11:33.249360 containerd[1439]: time="2025-09-12T17:11:33.249226394Z" level=error msg="ContainerStatus for \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\": not found" Sep 12 17:11:33.249633 kubelet[2489]: E0912 17:11:33.249521 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\": not found" containerID="1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b" Sep 12 17:11:33.249633 kubelet[2489]: I0912 17:11:33.249552 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b"} err="failed to get container status \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1af74b61a53a38d0dc4f94ca2a3ccab3a86e2e5bf3ac16dae8304d14df07c30b\": not found" Sep 12 17:11:33.249633 kubelet[2489]: I0912 17:11:33.249569 2489 scope.go:117] "RemoveContainer" containerID="1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc" Sep 12 17:11:33.249830 containerd[1439]: time="2025-09-12T17:11:33.249791201Z" level=error msg="ContainerStatus for \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\": not found" Sep 12 17:11:33.250076 kubelet[2489]: E0912 17:11:33.249971 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\": not found" containerID="1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc" Sep 12 17:11:33.250076 kubelet[2489]: I0912 17:11:33.249998 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc"} err="failed to get container status \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a5d09f1f264f7c8a2ba4a22629398767d0bd937a3216c404c2e7f1e8f90a4dc\": not found" Sep 12 17:11:33.250076 kubelet[2489]: I0912 17:11:33.250013 2489 scope.go:117] "RemoveContainer" containerID="fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491" Sep 12 17:11:33.250188 containerd[1439]: time="2025-09-12T17:11:33.250166965Z" level=error msg="ContainerStatus for \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\": not found" Sep 12 17:11:33.250441 kubelet[2489]: E0912 17:11:33.250398 2489 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\": not found" containerID="fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491" Sep 12 17:11:33.250441 kubelet[2489]: I0912 17:11:33.250430 2489 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491"} err="failed to get container status \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b66ea3291faa92b88e4afc0f2cfd98b3c2fb1a547c0ed7f5021d42fb04491\": not found" Sep 12 17:11:33.387802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8-rootfs.mount: Deactivated successfully. Sep 12 17:11:33.387926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c980217c71f2b1cffb3f5881fca77b0232a1f432bfade0c2406b16487c4f2a8-shm.mount: Deactivated successfully. Sep 12 17:11:33.387986 systemd[1]: var-lib-kubelet-pods-85a3a377\x2d55ec\x2d4fe1\x2d9db7\x2d547028df43df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6xqv4.mount: Deactivated successfully. Sep 12 17:11:33.388042 systemd[1]: var-lib-kubelet-pods-38b53b81\x2dc476\x2d44b6\x2dae94\x2dbd84b04cd7d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg9s4c.mount: Deactivated successfully. Sep 12 17:11:33.388096 systemd[1]: var-lib-kubelet-pods-38b53b81\x2dc476\x2d44b6\x2dae94\x2dbd84b04cd7d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:11:33.388147 systemd[1]: var-lib-kubelet-pods-38b53b81\x2dc476\x2d44b6\x2dae94\x2dbd84b04cd7d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 12 17:11:34.291210 sshd[4152]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:34.304022 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:46628.service: Deactivated successfully.
Sep 12 17:11:34.305578 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:11:34.307049 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:11:34.308325 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:46630.service - OpenSSH per-connection server daemon (10.0.0.1:46630).
Sep 12 17:11:34.309129 systemd-logind[1418]: Removed session 23.
Sep 12 17:11:34.352828 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 46630 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:11:34.354430 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:34.358652 systemd-logind[1418]: New session 24 of user core.
Sep 12 17:11:34.367112 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:11:34.630162 kubelet[2489]: I0912 17:11:34.629844 2489 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:11:34Z","lastTransitionTime":"2025-09-12T17:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:11:34.947272 kubelet[2489]: I0912 17:11:34.946499 2489 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b53b81-c476-44b6-ae94-bd84b04cd7d0" path="/var/lib/kubelet/pods/38b53b81-c476-44b6-ae94-bd84b04cd7d0/volumes"
Sep 12 17:11:34.947272 kubelet[2489]: I0912 17:11:34.947027 2489 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a3a377-55ec-4fe1-9db7-547028df43df" path="/var/lib/kubelet/pods/85a3a377-55ec-4fe1-9db7-547028df43df/volumes"
Sep 12 17:11:35.535094 sshd[4313]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:35.545080 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:46630.service: Deactivated successfully.
Sep 12 17:11:35.549424 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:11:35.549596 systemd[1]: session-24.scope: Consumed 1.071s CPU time.
Sep 12 17:11:35.551664 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:11:35.562730 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:46642.service - OpenSSH per-connection server daemon (10.0.0.1:46642).
Sep 12 17:11:35.567325 systemd-logind[1418]: Removed session 24.
Sep 12 17:11:35.581789 systemd[1]: Created slice kubepods-burstable-pod54aaf84e_7208_4d72_9c34_ae28dcd35ab8.slice - libcontainer container kubepods-burstable-pod54aaf84e_7208_4d72_9c34_ae28dcd35ab8.slice.
Sep 12 17:11:35.596031 kubelet[2489]: I0912 17:11:35.595943 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-clustermesh-secrets\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596163 kubelet[2489]: I0912 17:11:35.596081 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-cilium-run\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596163 kubelet[2489]: I0912 17:11:35.596109 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-bpf-maps\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596163 kubelet[2489]: I0912 17:11:35.596128 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-hostproc\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596244 kubelet[2489]: I0912 17:11:35.596180 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-cni-path\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596244 kubelet[2489]: I0912 17:11:35.596197 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-host-proc-sys-net\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596244 kubelet[2489]: I0912 17:11:35.596213 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-hubble-tls\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596303 kubelet[2489]: I0912 17:11:35.596256 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zc6\" (UniqueName: \"kubernetes.io/projected/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-kube-api-access-r5zc6\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596303 kubelet[2489]: I0912 17:11:35.596274 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-cilium-cgroup\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596347 kubelet[2489]: I0912 17:11:35.596306 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-lib-modules\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596347 kubelet[2489]: I0912 17:11:35.596322 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-cilium-config-path\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596347 kubelet[2489]: I0912 17:11:35.596335 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-etc-cni-netd\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596416 kubelet[2489]: I0912 17:11:35.596386 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-xtables-lock\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596416 kubelet[2489]: I0912 17:11:35.596403 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-cilium-ipsec-secrets\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.596460 kubelet[2489]: I0912 17:11:35.596421 2489 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54aaf84e-7208-4d72-9c34-ae28dcd35ab8-host-proc-sys-kernel\") pod \"cilium-z6h5d\" (UID: \"54aaf84e-7208-4d72-9c34-ae28dcd35ab8\") " pod="kube-system/cilium-z6h5d"
Sep 12 17:11:35.612998 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 46642 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:11:35.614391 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:35.618554 systemd-logind[1418]: New session 25 of user core.
Sep 12 17:11:35.630135 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:11:35.681123 sshd[4326]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:35.691540 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:46642.service: Deactivated successfully.
Sep 12 17:11:35.694374 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:11:35.695785 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:11:35.707372 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654).
Sep 12 17:11:35.717715 systemd-logind[1418]: Removed session 25.
Sep 12 17:11:35.747547 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:8JyPYHHUCQHtRmL1I0qnB3JzzyTRfsNg5qYJBQxVX8Y
Sep 12 17:11:35.748967 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:35.752899 systemd-logind[1418]: New session 26 of user core.
Sep 12 17:11:35.764106 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:11:35.896131 kubelet[2489]: E0912 17:11:35.895809 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:35.897572 containerd[1439]: time="2025-09-12T17:11:35.897408280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6h5d,Uid:54aaf84e-7208-4d72-9c34-ae28dcd35ab8,Namespace:kube-system,Attempt:0,}"
Sep 12 17:11:35.916089 containerd[1439]: time="2025-09-12T17:11:35.915468306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:11:35.916089 containerd[1439]: time="2025-09-12T17:11:35.915877791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:11:35.916089 containerd[1439]: time="2025-09-12T17:11:35.915889991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:35.916089 containerd[1439]: time="2025-09-12T17:11:35.916001873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:35.934131 systemd[1]: Started cri-containerd-b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b.scope - libcontainer container b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b.
Sep 12 17:11:35.960592 containerd[1439]: time="2025-09-12T17:11:35.960535351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6h5d,Uid:54aaf84e-7208-4d72-9c34-ae28dcd35ab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\""
Sep 12 17:11:35.961286 kubelet[2489]: E0912 17:11:35.961263 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:35.971270 containerd[1439]: time="2025-09-12T17:11:35.971226845Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:11:35.981666 containerd[1439]: time="2025-09-12T17:11:35.981617095Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1\""
Sep 12 17:11:35.982202 containerd[1439]: time="2025-09-12T17:11:35.982177902Z" level=info msg="StartContainer for \"86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1\""
Sep 12 17:11:36.010133 systemd[1]: Started cri-containerd-86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1.scope - libcontainer container 86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1.
Sep 12 17:11:36.038619 systemd[1]: cri-containerd-86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1.scope: Deactivated successfully.
Sep 12 17:11:36.048051 containerd[1439]: time="2025-09-12T17:11:36.047961635Z" level=info msg="StartContainer for \"86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1\" returns successfully"
Sep 12 17:11:36.079204 containerd[1439]: time="2025-09-12T17:11:36.078972402Z" level=info msg="shim disconnected" id=86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1 namespace=k8s.io
Sep 12 17:11:36.079204 containerd[1439]: time="2025-09-12T17:11:36.079034003Z" level=warning msg="cleaning up after shim disconnected" id=86f499ea4fc63e775a0fb547f54fedc16e471d09a6a41fcb93c193fade3700d1 namespace=k8s.io
Sep 12 17:11:36.079204 containerd[1439]: time="2025-09-12T17:11:36.079043163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:11:36.216883 kubelet[2489]: E0912 17:11:36.216852 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:36.221484 containerd[1439]: time="2025-09-12T17:11:36.221306753Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:11:36.232475 containerd[1439]: time="2025-09-12T17:11:36.232428379Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b\""
Sep 12 17:11:36.233265 containerd[1439]: time="2025-09-12T17:11:36.233137748Z" level=info msg="StartContainer for \"ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b\""
Sep 12 17:11:36.260117 systemd[1]: Started cri-containerd-ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b.scope - libcontainer container ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b.
Sep 12 17:11:36.282501 containerd[1439]: time="2025-09-12T17:11:36.282457036Z" level=info msg="StartContainer for \"ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b\" returns successfully"
Sep 12 17:11:36.289783 systemd[1]: cri-containerd-ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b.scope: Deactivated successfully.
Sep 12 17:11:36.323525 containerd[1439]: time="2025-09-12T17:11:36.323459855Z" level=info msg="shim disconnected" id=ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b namespace=k8s.io
Sep 12 17:11:36.323525 containerd[1439]: time="2025-09-12T17:11:36.323518496Z" level=warning msg="cleaning up after shim disconnected" id=ef77cbc7a050b380c77515baff1d4d738951054e6136ad025a66d3895e50d93b namespace=k8s.io
Sep 12 17:11:36.323525 containerd[1439]: time="2025-09-12T17:11:36.323527696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:11:37.227703 kubelet[2489]: E0912 17:11:37.227671 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:37.256210 containerd[1439]: time="2025-09-12T17:11:37.256048978Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:11:37.277566 containerd[1439]: time="2025-09-12T17:11:37.277447352Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545\""
Sep 12 17:11:37.278023 containerd[1439]: time="2025-09-12T17:11:37.277999679Z" level=info msg="StartContainer for \"52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545\""
Sep 12 17:11:37.317169 systemd[1]: Started cri-containerd-52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545.scope - libcontainer container 52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545.
Sep 12 17:11:37.347752 systemd[1]: cri-containerd-52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545.scope: Deactivated successfully.
Sep 12 17:11:37.353315 containerd[1439]: time="2025-09-12T17:11:37.353269193Z" level=info msg="StartContainer for \"52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545\" returns successfully"
Sep 12 17:11:37.376742 containerd[1439]: time="2025-09-12T17:11:37.376679394Z" level=info msg="shim disconnected" id=52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545 namespace=k8s.io
Sep 12 17:11:37.377035 containerd[1439]: time="2025-09-12T17:11:37.376737515Z" level=warning msg="cleaning up after shim disconnected" id=52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545 namespace=k8s.io
Sep 12 17:11:37.377035 containerd[1439]: time="2025-09-12T17:11:37.376803396Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:11:37.709892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52fea2ca284b16b6a3c58f036c49388674227fd1c945611c0956bfd13a3c1545-rootfs.mount: Deactivated successfully.
Sep 12 17:11:38.005315 kubelet[2489]: E0912 17:11:38.005187 2489 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:11:38.232057 kubelet[2489]: E0912 17:11:38.231912 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:38.238449 containerd[1439]: time="2025-09-12T17:11:38.238396678Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:11:38.271693 containerd[1439]: time="2025-09-12T17:11:38.271496311Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b\""
Sep 12 17:11:38.273149 containerd[1439]: time="2025-09-12T17:11:38.273107974Z" level=info msg="StartContainer for \"1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b\""
Sep 12 17:11:38.301087 systemd[1]: Started cri-containerd-1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b.scope - libcontainer container 1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b.
Sep 12 17:11:38.321794 systemd[1]: cri-containerd-1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b.scope: Deactivated successfully.
Sep 12 17:11:38.324817 containerd[1439]: time="2025-09-12T17:11:38.324506949Z" level=info msg="StartContainer for \"1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b\" returns successfully"
Sep 12 17:11:38.347685 containerd[1439]: time="2025-09-12T17:11:38.347607239Z" level=info msg="shim disconnected" id=1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b namespace=k8s.io
Sep 12 17:11:38.347685 containerd[1439]: time="2025-09-12T17:11:38.347666320Z" level=warning msg="cleaning up after shim disconnected" id=1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b namespace=k8s.io
Sep 12 17:11:38.347685 containerd[1439]: time="2025-09-12T17:11:38.347674520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:11:38.711831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dfe17e8e38515330b8fd229528abf1aafa503fe73cd1b766ebdefdaeb36c99b-rootfs.mount: Deactivated successfully.
Sep 12 17:11:39.239954 kubelet[2489]: E0912 17:11:39.239887 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:39.260769 containerd[1439]: time="2025-09-12T17:11:39.260719676Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:11:39.278788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951135353.mount: Deactivated successfully.
Sep 12 17:11:39.288399 containerd[1439]: time="2025-09-12T17:11:39.287870559Z" level=info msg="CreateContainer within sandbox \"b9750c1d6360aa66642c84181e8d741cdb7ade881422f2ee2f98a67a04321b7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e\""
Sep 12 17:11:39.289300 containerd[1439]: time="2025-09-12T17:11:39.288967015Z" level=info msg="StartContainer for \"575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e\""
Sep 12 17:11:39.321166 systemd[1]: Started cri-containerd-575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e.scope - libcontainer container 575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e.
Sep 12 17:11:39.348815 containerd[1439]: time="2025-09-12T17:11:39.348749303Z" level=info msg="StartContainer for \"575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e\" returns successfully"
Sep 12 17:11:39.641963 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:11:40.249697 kubelet[2489]: E0912 17:11:40.248821 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:40.944534 kubelet[2489]: E0912 17:11:40.944421 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:41.898039 kubelet[2489]: E0912 17:11:41.897927 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:42.680488 systemd-networkd[1382]: lxc_health: Link UP
Sep 12 17:11:42.684778 systemd-networkd[1382]: lxc_health: Gained carrier
Sep 12 17:11:43.899182 kubelet[2489]: E0912 17:11:43.898962 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:43.932827 kubelet[2489]: I0912 17:11:43.932324 2489 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z6h5d" podStartSLOduration=8.932306235 podStartE2EDuration="8.932306235s" podCreationTimestamp="2025-09-12 17:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:40.27553493 +0000 UTC m=+87.426946076" watchObservedRunningTime="2025-09-12 17:11:43.932306235 +0000 UTC m=+91.083717421"
Sep 12 17:11:44.155137 systemd-networkd[1382]: lxc_health: Gained IPv6LL
Sep 12 17:11:44.259441 kubelet[2489]: E0912 17:11:44.259294 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:44.302365 systemd[1]: run-containerd-runc-k8s.io-575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e-runc.4nOZrl.mount: Deactivated successfully.
Sep 12 17:11:45.260265 kubelet[2489]: E0912 17:11:45.260218 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:11:46.448085 systemd[1]: run-containerd-runc-k8s.io-575b9d6809fc262ee3b0f0b78f73bc04efdaf3e407671a447d8f7dfbcc13b92e-runc.wSwCGd.mount: Deactivated successfully.
Sep 12 17:11:48.629151 sshd[4334]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:48.632269 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:11:48.632596 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:46654.service: Deactivated successfully.
Sep 12 17:11:48.634524 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:11:48.635420 systemd-logind[1418]: Removed session 26.
Sep 12 17:11:49.944506 kubelet[2489]: E0912 17:11:49.944463 2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"