Sep 9 23:18:36.826448 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:18:36.826470 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Sep 9 22:11:11 -00 2025
Sep 9 23:18:36.826481 kernel: KASLR enabled
Sep 9 23:18:36.826486 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:18:36.826559 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 9 23:18:36.826565 kernel: random: crng init done
Sep 9 23:18:36.826572 kernel: secureboot: Secure boot disabled
Sep 9 23:18:36.826578 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:18:36.826584 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 9 23:18:36.826593 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:18:36.826599 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826604 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826610 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826616 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826624 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826632 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826638 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826645 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826651 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:18:36.826657 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:18:36.826664 kernel: NUMA: Failed to initialise from firmware
Sep 9 23:18:36.826670 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:18:36.826677 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 9 23:18:36.826683 kernel: Zone ranges:
Sep 9 23:18:36.826689 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:18:36.826697 kernel: DMA32 empty
Sep 9 23:18:36.826703 kernel: Normal empty
Sep 9 23:18:36.826709 kernel: Movable zone start for each node
Sep 9 23:18:36.826716 kernel: Early memory node ranges
Sep 9 23:18:36.826722 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 9 23:18:36.826729 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 9 23:18:36.826735 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 9 23:18:36.826741 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 23:18:36.826748 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 23:18:36.826754 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:18:36.826760 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:18:36.826766 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:18:36.826774 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:18:36.826780 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:18:36.826787 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:18:36.826796 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:18:36.826802 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:18:36.826809 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:18:36.826818 kernel: psci: Trusted OS migration not required
Sep 9 23:18:36.826824 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:18:36.826831 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:18:36.826838 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 23:18:36.826844 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 23:18:36.826851 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:18:36.826858 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:18:36.826864 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:18:36.826871 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 23:18:36.826883 kernel: CPU features: detected: Spectre-v4
Sep 9 23:18:36.826894 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:18:36.826900 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:18:36.826907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:18:36.826913 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:18:36.826920 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:18:36.826927 kernel: alternatives: applying boot alternatives
Sep 9 23:18:36.826934 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=21f768e38d6f559c285ae64c28cbdad2cb8e0d9191080506cf69923230b56ba0
Sep 9 23:18:36.826942 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:18:36.826949 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:18:36.826956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:18:36.826963 kernel: Fallback order for Node 0: 0
Sep 9 23:18:36.826971 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 23:18:36.826978 kernel: Policy zone: DMA
Sep 9 23:18:36.826984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:18:36.826991 kernel: software IO TLB: area num 4.
Sep 9 23:18:36.826998 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 23:18:36.827005 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved)
Sep 9 23:18:36.827012 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:18:36.827018 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:18:36.827025 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:18:36.827032 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:18:36.827039 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:18:36.827046 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:18:36.827054 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:18:36.827061 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:18:36.827068 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:18:36.827074 kernel: GICv3: 256 SPIs implemented
Sep 9 23:18:36.827081 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:18:36.827088 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:18:36.827094 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:18:36.827101 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:18:36.827108 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:18:36.827115 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:18:36.827122 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:18:36.827131 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 23:18:36.827138 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 23:18:36.827145 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:18:36.827152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:18:36.827158 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:18:36.827165 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:18:36.827172 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:18:36.827179 kernel: arm-pv: using stolen time PV
Sep 9 23:18:36.827186 kernel: Console: colour dummy device 80x25
Sep 9 23:18:36.827193 kernel: ACPI: Core revision 20230628
Sep 9 23:18:36.827200 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:18:36.827208 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:18:36.827215 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 23:18:36.827222 kernel: landlock: Up and running.
Sep 9 23:18:36.827229 kernel: SELinux: Initializing.
Sep 9 23:18:36.827237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:18:36.827243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:18:36.827250 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:18:36.827258 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:18:36.827265 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:18:36.827274 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:18:36.827280 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 23:18:36.827287 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 23:18:36.827294 kernel: Remapping and enabling EFI services.
Sep 9 23:18:36.827301 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:18:36.827308 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:18:36.827314 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:18:36.827321 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 23:18:36.827328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:18:36.827336 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:18:36.827343 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:18:36.827355 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:18:36.827363 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 23:18:36.827371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:18:36.827377 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:18:36.827385 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:18:36.827392 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:18:36.827399 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 23:18:36.827408 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:18:36.827415 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:18:36.827423 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:18:36.827430 kernel: SMP: Total of 4 processors activated.
Sep 9 23:18:36.827437 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:18:36.827445 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:18:36.827452 kernel: CPU features: detected: Common not Private translations
Sep 9 23:18:36.827460 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:18:36.827468 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:18:36.827476 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:18:36.827483 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:18:36.827497 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:18:36.827505 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:18:36.827513 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:18:36.827520 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:18:36.827527 kernel: alternatives: applying system-wide alternatives
Sep 9 23:18:36.827535 kernel: devtmpfs: initialized
Sep 9 23:18:36.827555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:18:36.827563 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:18:36.827570 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:18:36.827577 kernel: SMBIOS 3.0.0 present.
Sep 9 23:18:36.827584 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:18:36.827591 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:18:36.827599 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:18:36.827606 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:18:36.827613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:18:36.827622 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:18:36.827630 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Sep 9 23:18:36.827638 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:18:36.827645 kernel: cpuidle: using governor menu
Sep 9 23:18:36.827652 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:18:36.827660 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:18:36.827667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:18:36.827674 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:18:36.827682 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:18:36.827690 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 23:18:36.827698 kernel: Modules: 509248 pages in range for PLT usage
Sep 9 23:18:36.827705 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:18:36.827713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:18:36.827720 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:18:36.827728 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:18:36.827735 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:18:36.827746 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:18:36.827754 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:18:36.827763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:18:36.827771 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:18:36.827778 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:18:36.827785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:18:36.827792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:18:36.827813 kernel: ACPI: Interpreter enabled
Sep 9 23:18:36.827820 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:18:36.827827 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:18:36.827835 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:18:36.827843 kernel: printk: console [ttyAMA0] enabled
Sep 9 23:18:36.827853 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:18:36.828013 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:18:36.828095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:18:36.828164 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:18:36.828229 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:18:36.828296 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:18:36.828305 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:18:36.828315 kernel: PCI host bridge to bus 0000:00
Sep 9 23:18:36.828389 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:18:36.828452 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:18:36.828565 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:18:36.828629 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:18:36.828715 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 23:18:36.828801 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 23:18:36.828877 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 23:18:36.828966 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 23:18:36.829039 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:18:36.829108 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:18:36.829181 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 23:18:36.829251 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 23:18:36.829324 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:18:36.829386 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:18:36.829463 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:18:36.829473 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:18:36.829481 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:18:36.829488 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:18:36.829506 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:18:36.829513 kernel: iommu: Default domain type: Translated
Sep 9 23:18:36.829523 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:18:36.829531 kernel: efivars: Registered efivars operations
Sep 9 23:18:36.829552 kernel: vgaarb: loaded
Sep 9 23:18:36.829560 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:18:36.829567 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:18:36.829575 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:18:36.829582 kernel: pnp: PnP ACPI init
Sep 9 23:18:36.829668 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:18:36.829680 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:18:36.829690 kernel: NET: Registered PF_INET protocol family
Sep 9 23:18:36.829698 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:18:36.829705 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:18:36.829713 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:18:36.829720 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:18:36.829727 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:18:36.829735 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:18:36.829742 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:18:36.829750 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:18:36.829758 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:18:36.829765 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:18:36.829772 kernel: kvm [1]: HYP mode not available
Sep 9 23:18:36.829779 kernel: Initialise system trusted keyrings
Sep 9 23:18:36.829787 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:18:36.829794 kernel: Key type asymmetric registered
Sep 9 23:18:36.829801 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:18:36.829808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 23:18:36.829816 kernel: io scheduler mq-deadline registered
Sep 9 23:18:36.829825 kernel: io scheduler kyber registered
Sep 9 23:18:36.829832 kernel: io scheduler bfq registered
Sep 9 23:18:36.829840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:18:36.829848 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:18:36.829856 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:18:36.829946 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:18:36.829957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:18:36.829964 kernel: thunder_xcv, ver 1.0
Sep 9 23:18:36.829972 kernel: thunder_bgx, ver 1.0
Sep 9 23:18:36.829982 kernel: nicpf, ver 1.0
Sep 9 23:18:36.829990 kernel: nicvf, ver 1.0
Sep 9 23:18:36.830069 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:18:36.830136 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:18:36 UTC (1757459916)
Sep 9 23:18:36.830146 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:18:36.830153 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 23:18:36.830161 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 23:18:36.830168 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:18:36.830177 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:18:36.830185 kernel: Segment Routing with IPv6
Sep 9 23:18:36.830192 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:18:36.830199 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:18:36.830206 kernel: Key type dns_resolver registered
Sep 9 23:18:36.830213 kernel: registered taskstats version 1
Sep 9 23:18:36.830221 kernel: Loading compiled-in X.509 certificates
Sep 9 23:18:36.830228 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 3c4ba31f0a17c8a368cad32e74fc485e669c1e50'
Sep 9 23:18:36.830235 kernel: Key type .fscrypt registered
Sep 9 23:18:36.830244 kernel: Key type fscrypt-provisioning registered
Sep 9 23:18:36.830251 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:18:36.830259 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:18:36.830266 kernel: ima: No architecture policies found
Sep 9 23:18:36.830273 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:18:36.830281 kernel: clk: Disabling unused clocks
Sep 9 23:18:36.830288 kernel: Freeing unused kernel memory: 38400K
Sep 9 23:18:36.830296 kernel: Run /init as init process
Sep 9 23:18:36.830303 kernel: with arguments:
Sep 9 23:18:36.830311 kernel: /init
Sep 9 23:18:36.830319 kernel: with environment:
Sep 9 23:18:36.830326 kernel: HOME=/
Sep 9 23:18:36.830333 kernel: TERM=linux
Sep 9 23:18:36.830341 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:18:36.830350 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:18:36.830360 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:18:36.830370 systemd[1]: Detected virtualization kvm.
Sep 9 23:18:36.830378 systemd[1]: Detected architecture arm64.
Sep 9 23:18:36.830386 systemd[1]: Running in initrd.
Sep 9 23:18:36.830394 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:18:36.830402 systemd[1]: Hostname set to .
Sep 9 23:18:36.830410 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:18:36.830417 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:18:36.830425 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:18:36.830435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:18:36.830443 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:18:36.830451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:18:36.830459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:18:36.830468 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:18:36.830477 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:18:36.830485 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:18:36.830510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:18:36.830521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:18:36.830529 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:18:36.830537 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:18:36.830545 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:18:36.830553 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:18:36.830561 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:18:36.830569 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:18:36.830577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:18:36.830586 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:18:36.830594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:18:36.830602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:18:36.830609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:18:36.830617 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:18:36.830625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:18:36.830633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:18:36.830641 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:18:36.830650 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:18:36.830659 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:18:36.830666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:18:36.830674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:18:36.830681 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:18:36.830689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:18:36.830699 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:18:36.830707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:18:36.830734 systemd-journald[238]: Collecting audit messages is disabled.
Sep 9 23:18:36.830756 systemd-journald[238]: Journal started
Sep 9 23:18:36.830774 systemd-journald[238]: Runtime Journal (/run/log/journal/a060e0844c7043b2b57190cf04423840) is 5.9M, max 47.3M, 41.4M free.
Sep 9 23:18:36.838566 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:18:36.838599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:18:36.838612 kernel: Bridge firewalling registered
Sep 9 23:18:36.823472 systemd-modules-load[239]: Inserted module 'overlay'
Sep 9 23:18:36.838070 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 9 23:18:36.842974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:18:36.842994 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:18:36.845521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:18:36.847124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:18:36.850453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:18:36.852578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:18:36.854514 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:18:36.860750 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:18:36.863581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:18:36.865622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:18:36.867679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:18:36.879641 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:18:36.882187 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:18:36.891335 dracut-cmdline[278]: dracut-dracut-053
Sep 9 23:18:36.893807 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=21f768e38d6f559c285ae64c28cbdad2cb8e0d9191080506cf69923230b56ba0
Sep 9 23:18:36.910940 systemd-resolved[281]: Positive Trust Anchors:
Sep 9 23:18:36.910958 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:18:36.910989 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:18:36.915870 systemd-resolved[281]: Defaulting to hostname 'linux'.
Sep 9 23:18:36.917273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:18:36.919168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:18:36.961519 kernel: SCSI subsystem initialized
Sep 9 23:18:36.966511 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:18:36.973518 kernel: iscsi: registered transport (tcp)
Sep 9 23:18:36.986514 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:18:36.986536 kernel: QLogic iSCSI HBA Driver
Sep 9 23:18:37.028744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:18:37.035683 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:18:37.050516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:18:37.050565 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:18:37.050576 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 23:18:37.096525 kernel: raid6: neonx8 gen() 15785 MB/s
Sep 9 23:18:37.113502 kernel: raid6: neonx4 gen() 15820 MB/s
Sep 9 23:18:37.130502 kernel: raid6: neonx2 gen() 13234 MB/s
Sep 9 23:18:37.147505 kernel: raid6: neonx1 gen() 10533 MB/s
Sep 9 23:18:37.164501 kernel: raid6: int64x8 gen() 6795 MB/s
Sep 9 23:18:37.181506 kernel: raid6: int64x4 gen() 7347 MB/s
Sep 9 23:18:37.198508 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 9 23:18:37.215510 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 9 23:18:37.215528 kernel: raid6: using algorithm neonx4 gen() 15820 MB/s
Sep 9 23:18:37.232515 kernel: raid6: .... xor() 12489 MB/s, rmw enabled
Sep 9 23:18:37.232527 kernel: raid6: using neon recovery algorithm
Sep 9 23:18:37.237509 kernel: xor: measuring software checksum speed
Sep 9 23:18:37.237529 kernel: 8regs : 21533 MB/sec
Sep 9 23:18:37.238975 kernel: 32regs : 20272 MB/sec
Sep 9 23:18:37.238998 kernel: arm64_neon : 27908 MB/sec
Sep 9 23:18:37.239024 kernel: xor: using function: arm64_neon (27908 MB/sec)
Sep 9 23:18:37.286524 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:18:37.297234 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:18:37.307693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:18:37.321189 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Sep 9 23:18:37.324944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:18:37.327443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:18:37.342058 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Sep 9 23:18:37.368579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:18:37.378651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:18:37.419581 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:18:37.425677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:18:37.439369 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:18:37.443259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:18:37.444633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:18:37.445444 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:18:37.451716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:18:37.460997 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:18:37.478512 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:18:37.484121 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:18:37.485699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:18:37.485771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:18:37.497658 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:18:37.498530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:18:37.503982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:18:37.504003 kernel: GPT:9289727 != 19775487
Sep 9 23:18:37.504017 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:18:37.504027 kernel: GPT:9289727 != 19775487
Sep 9 23:18:37.504037 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:18:37.504046 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:18:37.498589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:18:37.503975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:18:37.511704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:18:37.522502 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (518)
Sep 9 23:18:37.524525 kernel: BTRFS: device fsid 3ddee560-dcea-4f51-a281-f1376972e538 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (510)
Sep 9 23:18:37.524596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:18:37.536600 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:18:37.548022 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:18:37.555284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:18:37.561322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:18:37.562374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:18:37.582656 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:18:37.586688 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:18:37.589302 disk-uuid[554]: Primary Header is updated.
Sep 9 23:18:37.589302 disk-uuid[554]: Secondary Entries is updated.
Sep 9 23:18:37.589302 disk-uuid[554]: Secondary Header is updated.
Sep 9 23:18:37.591917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:18:37.612569 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:18:38.602513 disk-uuid[555]: The operation has completed successfully.
Sep 9 23:18:38.603318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:18:38.636657 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:18:38.636760 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:18:38.667712 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:18:38.670456 sh[576]: Success
Sep 9 23:18:38.679589 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 23:18:38.709926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:18:38.724997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:18:38.726540 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:18:38.739848 kernel: BTRFS info (device dm-0): first mount of filesystem 3ddee560-dcea-4f51-a281-f1376972e538
Sep 9 23:18:38.739902 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:18:38.739913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 23:18:38.741528 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:18:38.741596 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 23:18:38.747635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:18:38.748820 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:18:38.758883 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:18:38.761365 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:18:38.781310 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:18:38.781369 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:18:38.781380 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:18:38.787323 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:18:38.791842 kernel: BTRFS info (device vda6): last unmount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:18:38.799536 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:18:38.804793 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:18:38.872453 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:18:38.884716 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:18:38.887759 ignition[680]: Ignition 2.20.0
Sep 9 23:18:38.887770 ignition[680]: Stage: fetch-offline
Sep 9 23:18:38.887812 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:38.887824 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:38.887987 ignition[680]: parsed url from cmdline: ""
Sep 9 23:18:38.887991 ignition[680]: no config URL provided
Sep 9 23:18:38.887995 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:18:38.888004 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:18:38.888027 ignition[680]: op(1): [started] loading QEMU firmware config module
Sep 9 23:18:38.888032 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:18:38.896960 ignition[680]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:18:38.911939 systemd-networkd[764]: lo: Link UP
Sep 9 23:18:38.911954 systemd-networkd[764]: lo: Gained carrier
Sep 9 23:18:38.912771 systemd-networkd[764]: Enumeration completed
Sep 9 23:18:38.913076 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:18:38.913198 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:18:38.913201 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:18:38.914586 systemd[1]: Reached target network.target - Network.
Sep 9 23:18:38.915470 systemd-networkd[764]: eth0: Link UP
Sep 9 23:18:38.915474 systemd-networkd[764]: eth0: Gained carrier
Sep 9 23:18:38.915481 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:18:38.936560 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:18:38.948109 ignition[680]: parsing config with SHA512: d5e249b3655e8a68923fbf642f9ff87e0c0a2688606ba4a22a4bfc1295994135137a365666c2cedd3bd65742ebdf6fd21e3cfc485de82300031824354da7d3ed
Sep 9 23:18:38.952912 unknown[680]: fetched base config from "system"
Sep 9 23:18:38.952922 unknown[680]: fetched user config from "qemu"
Sep 9 23:18:38.954287 ignition[680]: fetch-offline: fetch-offline passed
Sep 9 23:18:38.956239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:18:38.954417 ignition[680]: Ignition finished successfully
Sep 9 23:18:38.957635 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:18:38.970133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:18:38.986382 ignition[772]: Ignition 2.20.0
Sep 9 23:18:38.986393 ignition[772]: Stage: kargs
Sep 9 23:18:38.986582 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:38.986592 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:38.987473 ignition[772]: kargs: kargs passed
Sep 9 23:18:38.987538 ignition[772]: Ignition finished successfully
Sep 9 23:18:38.990579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:18:39.004691 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:18:39.014998 ignition[780]: Ignition 2.20.0
Sep 9 23:18:39.015009 ignition[780]: Stage: disks
Sep 9 23:18:39.015180 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:39.015190 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:39.016159 ignition[780]: disks: disks passed
Sep 9 23:18:39.018550 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:18:39.016208 ignition[780]: Ignition finished successfully
Sep 9 23:18:39.019574 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:18:39.021087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:18:39.022551 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:18:39.024285 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:18:39.026161 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:18:39.037661 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:18:39.049646 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 23:18:39.053064 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:18:39.054981 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:18:39.101541 kernel: EXT4-fs (vda9): mounted filesystem e3172dee-2277-4905-9eaa-a536ab409f20 r/w with ordered data mode. Quota mode: none.
Sep 9 23:18:39.101599 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:18:39.102680 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:18:39.113589 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:18:39.115641 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:18:39.116517 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:18:39.116561 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:18:39.116586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:18:39.124436 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (799)
Sep 9 23:18:39.122877 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:18:39.128369 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:18:39.128390 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:18:39.128400 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:18:39.124416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:18:39.131547 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:18:39.132627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:18:39.163655 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:18:39.167853 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:18:39.171718 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:18:39.175538 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:18:39.256221 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:18:39.267598 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:18:39.269070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:18:39.274525 kernel: BTRFS info (device vda6): last unmount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:18:39.289847 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:18:39.294754 ignition[913]: INFO : Ignition 2.20.0
Sep 9 23:18:39.294754 ignition[913]: INFO : Stage: mount
Sep 9 23:18:39.296172 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:39.296172 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:39.296172 ignition[913]: INFO : mount: mount passed
Sep 9 23:18:39.296172 ignition[913]: INFO : Ignition finished successfully
Sep 9 23:18:39.299531 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:18:39.306651 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:18:39.888172 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:18:39.900703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:18:39.907520 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (928)
Sep 9 23:18:39.909966 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:18:39.910015 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:18:39.910026 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:18:39.912512 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:18:39.914067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:18:39.930187 ignition[945]: INFO : Ignition 2.20.0
Sep 9 23:18:39.930187 ignition[945]: INFO : Stage: files
Sep 9 23:18:39.931725 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:39.931725 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:39.931725 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:18:39.935357 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:18:39.935357 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:18:39.940283 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:18:39.942130 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:18:39.942130 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:18:39.940965 unknown[945]: wrote ssh authorized keys file for user: core
Sep 9 23:18:39.946806 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 23:18:39.946806 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 23:18:40.020559 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:18:40.128644 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 9 23:18:40.525691 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 23:18:40.527391 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:18:40.527391 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:18:40.803721 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:18:40.975746 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:18:40.977562 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 23:18:41.408399 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:18:42.265069 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:18:42.265069 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 23:18:42.268438 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:18:42.280891 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:18:42.284305 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:18:42.285512 ignition[945]: INFO : files: files passed
Sep 9 23:18:42.285512 ignition[945]: INFO : Ignition finished successfully
Sep 9 23:18:42.287938 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:18:42.304727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:18:42.307067 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:18:42.311310 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:18:42.311414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:18:42.314832 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:18:42.318009 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:18:42.318009 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:18:42.320810 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:18:42.322423 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:18:42.323719 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:18:42.340701 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:18:42.358776 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:18:42.359566 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:18:42.361612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:18:42.362377 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:18:42.363820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:18:42.364647 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:18:42.378995 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:18:42.394723 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:18:42.402779 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:18:42.403738 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:18:42.405311 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:18:42.406690 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:18:42.406819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:18:42.408711 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:18:42.410234 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:18:42.411488 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:18:42.412957 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:18:42.414451 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:18:42.416191 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:18:42.417652 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:18:42.419137 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:18:42.420602 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:18:42.421912 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:18:42.423095 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:18:42.423220 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:18:42.424977 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:18:42.426382 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:18:42.427925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:18:42.428566 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:18:42.429464 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:18:42.429587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:18:42.431775 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:18:42.431900 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:18:42.433414 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:18:42.434645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:18:42.439553 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:18:42.440534 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:18:42.442260 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:18:42.443466 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:18:42.443567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:18:42.444777 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:18:42.444851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:18:42.446035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:18:42.446142 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:18:42.447551 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:18:42.447652 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:18:42.459744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:18:42.460460 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:18:42.460617 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:18:42.465740 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:18:42.466412 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:18:42.466566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:18:42.468001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:18:42.468103 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:18:42.472341 ignition[999]: INFO : Ignition 2.20.0
Sep 9 23:18:42.472341 ignition[999]: INFO : Stage: umount
Sep 9 23:18:42.472341 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:18:42.472341 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:18:42.475385 ignition[999]: INFO : umount: umount passed
Sep 9 23:18:42.475385 ignition[999]: INFO : Ignition finished successfully
Sep 9 23:18:42.474633 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:18:42.474718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:18:42.476632 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:18:42.476713 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:18:42.478400 systemd[1]: Stopped target network.target - Network.
Sep 9 23:18:42.479909 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:18:42.479983 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:18:42.481692 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:18:42.481735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:18:42.483298 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:18:42.483342 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:18:42.485449 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:18:42.485504 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:18:42.487167 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:18:42.488380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:18:42.490453 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:18:42.496478 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:18:42.496588 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:18:42.502161 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:18:42.502408 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:18:42.502508 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:18:42.506024 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:18:42.511427 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:18:42.511512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:18:42.528242 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:18:42.529508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:18:42.529597 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:18:42.532692 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:18:42.532758 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:18:42.535300 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:18:42.535362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:18:42.539820 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:18:42.539888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:18:42.542315 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:18:42.550321 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:18:42.550398 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:18:42.561004 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:18:42.561165 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:18:42.564791 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:18:42.564890 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:18:42.568998 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:18:42.569130 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:18:42.571391 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:18:42.571455 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:18:42.573122 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:18:42.573155 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:18:42.574550 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:18:42.574608 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:18:42.576663 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:18:42.576707 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:18:42.578738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:18:42.578785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:18:42.581003 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:18:42.581049 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Sep 9 23:18:42.598692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:18:42.599530 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:18:42.599598 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:18:42.602297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:18:42.602342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:18:42.605606 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:18:42.605669 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:18:42.606017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:18:42.606101 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:18:42.607959 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:18:42.610143 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:18:42.619727 systemd[1]: Switching root. Sep 9 23:18:42.641538 systemd-journald[238]: Journal stopped Sep 9 23:18:43.377587 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Sep 9 23:18:43.377642 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:18:43.377660 kernel: SELinux: policy capability open_perms=1 Sep 9 23:18:43.377670 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:18:43.377680 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:18:43.377691 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:18:43.377701 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:18:43.377710 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:18:43.377719 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:18:43.377728 kernel: audit: type=1403 audit(1757459922.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:18:43.377739 systemd[1]: Successfully loaded SELinux policy in 35.199ms. Sep 9 23:18:43.377759 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.853ms. Sep 9 23:18:43.377771 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:18:43.377783 systemd[1]: Detected virtualization kvm. Sep 9 23:18:43.377795 systemd[1]: Detected architecture arm64. Sep 9 23:18:43.377805 systemd[1]: Detected first boot. Sep 9 23:18:43.377815 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:18:43.377825 zram_generator::config[1046]: No configuration found. Sep 9 23:18:43.377837 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:18:43.377846 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:18:43.377866 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:18:43.377877 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Sep 9 23:18:43.377889 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:18:43.377899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:18:43.377910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:18:43.377920 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:18:43.377930 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:18:43.377940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:18:43.377950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:18:43.377961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:18:43.377971 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:18:43.377982 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:18:43.377993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:18:43.378003 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:18:43.378013 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:18:43.378023 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:18:43.378033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:18:43.378043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:18:43.378053 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 23:18:43.378065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 9 23:18:43.378077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:18:43.378087 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:18:43.378097 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:18:43.378107 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:18:43.378121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:18:43.378132 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:18:43.378142 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:18:43.378152 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:18:43.378164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:18:43.378174 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:18:43.378184 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:18:43.378194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:18:43.378204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:18:43.378213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:18:43.378224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:18:43.378234 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:18:43.378244 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:18:43.378255 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:18:43.378278 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:18:43.378288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:18:43.378300 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 9 23:18:43.378310 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:18:43.378320 systemd[1]: Reached target machines.target - Containers. Sep 9 23:18:43.378331 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:18:43.378341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:18:43.378353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:18:43.378363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:18:43.378375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:18:43.378384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:18:43.378394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:18:43.378404 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:18:43.378414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:18:43.378424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:18:43.378436 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:18:43.378446 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:18:43.378456 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:18:43.378466 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 23:18:43.378477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:18:43.378487 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:18:43.378549 kernel: fuse: init (API version 7.39) Sep 9 23:18:43.378560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:18:43.378571 kernel: loop: module loaded Sep 9 23:18:43.378582 kernel: ACPI: bus type drm_connector registered Sep 9 23:18:43.378592 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:18:43.378602 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:18:43.378612 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:18:43.378641 systemd-journald[1114]: Collecting audit messages is disabled. Sep 9 23:18:43.378666 systemd-journald[1114]: Journal started Sep 9 23:18:43.378688 systemd-journald[1114]: Runtime Journal (/run/log/journal/a060e0844c7043b2b57190cf04423840) is 5.9M, max 47.3M, 41.4M free. Sep 9 23:18:43.201565 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:18:43.212434 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:18:43.212846 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:18:43.381507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:18:43.382708 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:18:43.382728 systemd[1]: Stopped verity-setup.service. Sep 9 23:18:43.389952 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:18:43.390671 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 9 23:18:43.391619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:18:43.392550 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:18:43.393427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:18:43.394463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:18:43.395423 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:18:43.396618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:18:43.399902 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:18:43.400079 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:18:43.402924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:18:43.403098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:18:43.404539 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:18:43.404726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:18:43.405968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:18:43.406119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:18:43.407379 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:18:43.407646 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:18:43.408749 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:18:43.408934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:18:43.410267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:18:43.411639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:18:43.412848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 23:18:43.414281 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:18:43.415761 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:18:43.428727 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:18:43.435604 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:18:43.437580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:18:43.438427 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:18:43.438472 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:18:43.440247 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:18:43.442423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:18:43.444513 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:18:43.445372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:18:43.446513 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:18:43.449351 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:18:43.450347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:18:43.451731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:18:43.453137 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:18:43.454850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 23:18:43.456766 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:18:43.461760 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:18:43.464124 systemd-journald[1114]: Time spent on flushing to /var/log/journal/a060e0844c7043b2b57190cf04423840 is 14.760ms for 871 entries. Sep 9 23:18:43.464124 systemd-journald[1114]: System Journal (/var/log/journal/a060e0844c7043b2b57190cf04423840) is 8M, max 195.6M, 187.6M free. Sep 9 23:18:43.502336 systemd-journald[1114]: Received client request to flush runtime journal. Sep 9 23:18:43.502397 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 23:18:43.502417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:18:43.465605 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:18:43.467219 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:18:43.468584 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:18:43.475783 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:18:43.477096 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:18:43.481431 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:18:43.497804 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:18:43.500598 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 23:18:43.503037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:18:43.507546 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:18:43.517122 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 9 23:18:43.523572 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:18:43.530670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:18:43.531530 kernel: loop1: detected capacity change from 0 to 123192 Sep 9 23:18:43.532722 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:18:43.551047 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 9 23:18:43.551061 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 9 23:18:43.555429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:18:43.568576 kernel: loop2: detected capacity change from 0 to 113512 Sep 9 23:18:43.597533 kernel: loop3: detected capacity change from 0 to 211168 Sep 9 23:18:43.604537 kernel: loop4: detected capacity change from 0 to 123192 Sep 9 23:18:43.611111 kernel: loop5: detected capacity change from 0 to 113512 Sep 9 23:18:43.614206 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:18:43.614620 (sd-merge)[1189]: Merged extensions into '/usr'. Sep 9 23:18:43.619954 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:18:43.619970 systemd[1]: Reloading... Sep 9 23:18:43.686638 zram_generator::config[1216]: No configuration found. Sep 9 23:18:43.757609 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:18:43.791350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:18:43.847579 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:18:43.847842 systemd[1]: Reloading finished in 227 ms. 
Sep 9 23:18:43.866320 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:18:43.867701 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:18:43.881962 systemd[1]: Starting ensure-sysext.service... Sep 9 23:18:43.883894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:18:43.899749 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:18:43.899969 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:18:43.900606 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:18:43.900813 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 9 23:18:43.900871 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 9 23:18:43.901250 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:18:43.901265 systemd[1]: Reloading... Sep 9 23:18:43.903478 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:18:43.903488 systemd-tmpfiles[1252]: Skipping /boot Sep 9 23:18:43.912167 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:18:43.912185 systemd-tmpfiles[1252]: Skipping /boot Sep 9 23:18:43.954581 zram_generator::config[1282]: No configuration found. Sep 9 23:18:44.039742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:18:44.095828 systemd[1]: Reloading finished in 194 ms. Sep 9 23:18:44.106267 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Sep 9 23:18:44.120781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:18:44.128303 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:18:44.130945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:18:44.133203 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:18:44.136838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:18:44.139604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:18:44.143906 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:18:44.149847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:18:44.153893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:18:44.165672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:18:44.168067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:18:44.169082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:18:44.169240 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:18:44.172841 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:18:44.178126 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:18:44.180062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:18:44.180300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 9 23:18:44.181702 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Sep 9 23:18:44.181974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:18:44.182211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:18:44.183121 augenrules[1346]: No rules Sep 9 23:18:44.183995 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:18:44.184221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:18:44.185796 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:18:44.186017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:18:44.197079 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:18:44.201679 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:18:44.203350 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:18:44.213733 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:18:44.214584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:18:44.215699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:18:44.218682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:18:44.221195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:18:44.224942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:18:44.225833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 23:18:44.225893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:18:44.228732 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:18:44.230933 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:18:44.232059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:18:44.257654 augenrules[1369]: /sbin/augenrules: No change Sep 9 23:18:44.263337 augenrules[1405]: No rules Sep 9 23:18:44.282727 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:18:44.284371 systemd[1]: Finished ensure-sysext.service. Sep 9 23:18:44.285363 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:18:44.285653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:18:44.287939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:18:44.288210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:18:44.289411 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:18:44.289591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:18:44.290686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:18:44.290862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:18:44.292048 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:18:44.292213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:18:44.293455 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 9 23:18:44.294183 systemd-resolved[1321]: Positive Trust Anchors: Sep 9 23:18:44.294211 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:18:44.294244 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:18:44.301088 systemd-resolved[1321]: Defaulting to hostname 'linux'. Sep 9 23:18:44.309532 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1368) Sep 9 23:18:44.312142 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:18:44.315754 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:18:44.339904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:18:44.341017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:18:44.341086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:18:44.349202 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:18:44.358222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 23:18:44.358972 systemd-networkd[1386]: lo: Link UP Sep 9 23:18:44.358984 systemd-networkd[1386]: lo: Gained carrier Sep 9 23:18:44.359833 systemd-networkd[1386]: Enumeration completed Sep 9 23:18:44.360265 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:18:44.360276 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:18:44.360748 systemd-networkd[1386]: eth0: Link UP Sep 9 23:18:44.360756 systemd-networkd[1386]: eth0: Gained carrier Sep 9 23:18:44.360770 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:18:44.369640 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:18:44.372043 systemd[1]: Reached target network.target - Network. Sep 9 23:18:44.374273 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 23:18:44.375149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:18:44.378198 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:18:44.382037 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:18:44.392340 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:18:44.396626 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:18:44.428802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:18:44.433530 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 9 23:18:44.433578 systemd-timesyncd[1424]: Initial clock synchronization to Tue 2025-09-09 23:18:44.076961 UTC.
Sep 9 23:18:44.434014 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 23:18:44.435404 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 9 23:18:44.437141 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:18:44.453746 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 9 23:18:44.461282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:18:44.462984 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 23:18:44.492997 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 9 23:18:44.494247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:18:44.495204 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:18:44.496209 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 23:18:44.497281 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 23:18:44.498579 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:18:44.499526 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:18:44.500528 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:18:44.501444 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:18:44.501476 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:18:44.502223 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:18:44.504151 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:18:44.506432 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:18:44.509647 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:18:44.510863 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:18:44.511957 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:18:44.517463 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:18:44.518894 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:18:44.521099 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 9 23:18:44.522651 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:18:44.523665 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:18:44.524421 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:18:44.525272 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:18:44.525303 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:18:44.526295 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:18:44.530522 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 23:18:44.528226 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:18:44.531722 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:18:44.534070 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:18:44.535378 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:18:44.539483 jq[1449]: false
Sep 9 23:18:44.539707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:18:44.541846 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:18:44.544709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found loop3
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found loop4
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found loop5
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda1
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda2
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda3
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found usr
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda4
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda6
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda7
Sep 9 23:18:44.548539 extend-filesystems[1450]: Found vda9
Sep 9 23:18:44.548539 extend-filesystems[1450]: Checking size of /dev/vda9
Sep 9 23:18:44.551117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:18:44.562013 dbus-daemon[1448]: [system] SELinux support is enabled
Sep 9 23:18:44.558487 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:18:44.570193 extend-filesystems[1450]: Resized partition /dev/vda9
Sep 9 23:18:44.561040 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:18:44.574798 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Sep 9 23:18:44.562302 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:18:44.569941 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:18:44.576664 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:18:44.578798 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:18:44.581824 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 9 23:18:44.585134 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:18:44.585174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1359)
Sep 9 23:18:44.586138 jq[1471]: true
Sep 9 23:18:44.586754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:18:44.588655 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:18:44.588989 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:18:44.589169 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:18:44.592174 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:18:44.592349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:18:44.604476 update_engine[1464]: I20250909 23:18:44.603856 1464 main.cc:92] Flatcar Update Engine starting
Sep 9 23:18:44.606450 jq[1475]: true
Sep 9 23:18:44.609000 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:18:44.610217 update_engine[1464]: I20250909 23:18:44.610069 1464 update_check_scheduler.cc:74] Next update check in 11m7s
Sep 9 23:18:44.622513 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:18:44.623894 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:18:44.623933 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:18:44.625150 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:18:44.625175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:18:44.636727 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:18:44.667133 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:18:44.667343 systemd-logind[1463]: New seat seat0.
Sep 9 23:18:44.668052 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:18:44.675756 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:18:44.675816 tar[1473]: linux-arm64/LICENSE
Sep 9 23:18:44.698674 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:18:44.707727 tar[1473]: linux-arm64/helm
Sep 9 23:18:44.708994 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:18:44.708994 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:18:44.708994 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:18:44.712576 extend-filesystems[1450]: Resized filesystem in /dev/vda9
Sep 9 23:18:44.711890 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:18:44.713657 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:18:44.730810 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:18:44.734793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:18:44.736456 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:18:44.800988 containerd[1476]: time="2025-09-09T23:18:44.800825200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 9 23:18:44.840034 containerd[1476]: time="2025-09-09T23:18:44.839950920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.841952160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.841986720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842027120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842182880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842198560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842252800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842266120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842455920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842470880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842483200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842523 containerd[1476]: time="2025-09-09T23:18:44.842510000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842823 containerd[1476]: time="2025-09-09T23:18:44.842591760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842843 containerd[1476]: time="2025-09-09T23:18:44.842821360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842994 containerd[1476]: time="2025-09-09T23:18:44.842965360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 23:18:44.842994 containerd[1476]: time="2025-09-09T23:18:44.842988440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 23:18:44.843087 containerd[1476]: time="2025-09-09T23:18:44.843072360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 23:18:44.843194 containerd[1476]: time="2025-09-09T23:18:44.843120280Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:18:44.846803 containerd[1476]: time="2025-09-09T23:18:44.846769960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 23:18:44.846891 containerd[1476]: time="2025-09-09T23:18:44.846823120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 23:18:44.846891 containerd[1476]: time="2025-09-09T23:18:44.846838840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 9 23:18:44.846891 containerd[1476]: time="2025-09-09T23:18:44.846860440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 9 23:18:44.846891 containerd[1476]: time="2025-09-09T23:18:44.846875400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 23:18:44.847148 containerd[1476]: time="2025-09-09T23:18:44.847016080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 23:18:44.847278 containerd[1476]: time="2025-09-09T23:18:44.847243240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 23:18:44.847459 containerd[1476]: time="2025-09-09T23:18:44.847440480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 9 23:18:44.847506 containerd[1476]: time="2025-09-09T23:18:44.847469160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 9 23:18:44.847506 containerd[1476]: time="2025-09-09T23:18:44.847484320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 9 23:18:44.847558 containerd[1476]: time="2025-09-09T23:18:44.847518320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847558 containerd[1476]: time="2025-09-09T23:18:44.847534320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847558 containerd[1476]: time="2025-09-09T23:18:44.847547720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847605 containerd[1476]: time="2025-09-09T23:18:44.847560800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847605 containerd[1476]: time="2025-09-09T23:18:44.847575760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847605 containerd[1476]: time="2025-09-09T23:18:44.847589040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847605 containerd[1476]: time="2025-09-09T23:18:44.847601600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847669 containerd[1476]: time="2025-09-09T23:18:44.847613920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 23:18:44.847669 containerd[1476]: time="2025-09-09T23:18:44.847634840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847669 containerd[1476]: time="2025-09-09T23:18:44.847648880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847669 containerd[1476]: time="2025-09-09T23:18:44.847662360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847737 containerd[1476]: time="2025-09-09T23:18:44.847675160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847737 containerd[1476]: time="2025-09-09T23:18:44.847687120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847737 containerd[1476]: time="2025-09-09T23:18:44.847703240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847737 containerd[1476]: time="2025-09-09T23:18:44.847715880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847737 containerd[1476]: time="2025-09-09T23:18:44.847731800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847744440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847759520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847771080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847782320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847795000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847817 containerd[1476]: time="2025-09-09T23:18:44.847809560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 9 23:18:44.847924 containerd[1476]: time="2025-09-09T23:18:44.847829320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847924 containerd[1476]: time="2025-09-09T23:18:44.847842280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.847924 containerd[1476]: time="2025-09-09T23:18:44.847862880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848039080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848059040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848069080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848081040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848092040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848103800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848113920Z" level=info msg="NRI interface is disabled by configuration."
Sep 9 23:18:44.848184 containerd[1476]: time="2025-09-09T23:18:44.848126440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 23:18:44.848540 containerd[1476]: time="2025-09-09T23:18:44.848466480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 23:18:44.848667 containerd[1476]: time="2025-09-09T23:18:44.848545760Z" level=info msg="Connect containerd service"
Sep 9 23:18:44.848667 containerd[1476]: time="2025-09-09T23:18:44.848583480Z" level=info msg="using legacy CRI server"
Sep 9 23:18:44.848667 containerd[1476]: time="2025-09-09T23:18:44.848591360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:18:44.849525 containerd[1476]: time="2025-09-09T23:18:44.848937040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.850890840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851321240Z" level=info msg="Start subscribing containerd event"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851371320Z" level=info msg="Start recovering state"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851429320Z" level=info msg="Start event monitor"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851441000Z" level=info msg="Start snapshots syncer"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851450400Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851457760Z" level=info msg="Start streaming server"
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851721640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.851757760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:18:44.852507 containerd[1476]: time="2025-09-09T23:18:44.852044680Z" level=info msg="containerd successfully booted in 0.052113s"
Sep 9 23:18:44.852193 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:18:45.062976 tar[1473]: linux-arm64/README.md
Sep 9 23:18:45.081541 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:18:45.301090 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:18:45.321569 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:18:45.333762 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:18:45.338922 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:18:45.339147 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:18:45.342765 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:18:45.353127 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:18:45.356791 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:18:45.358615 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:18:45.359589 systemd[1]: Reached target getty.target - Login Prompts.
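The "Start cri plugin with config" record dumps containerd's effective CRI settings. As a hedged illustration (the actual file on this host was not captured in the log), the key logged values, overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, SELinux enabled, and the registry.k8s.io/pause:3.8 sandbox image, would correspond to a /etc/containerd/config.toml fragment roughly like:

```toml
# Hypothetical fragment matching the logged CRI PluginConfig values;
# illustrative only, not recovered from this host.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  enable_selinux = true

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The level=error "failed to load cni during init" record is consistent with conf_dir (/etc/cni/net.d) being empty at this point in boot; a CNI plugin is expected to populate it later, after which the "cni network conf syncer" picks it up.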
Sep 9 23:18:45.824670 systemd-networkd[1386]: eth0: Gained IPv6LL
Sep 9 23:18:45.826821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:18:45.828158 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:18:45.838764 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:18:45.841052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:18:45.843006 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:18:45.858660 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:18:45.858871 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:18:45.860429 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:18:45.861803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:18:46.358941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:18:46.360231 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:18:46.363266 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:18:46.364607 systemd[1]: Startup finished in 511ms (kernel) + 6.129s (initrd) + 3.593s (userspace) = 10.235s.
Sep 9 23:18:46.704027 kubelet[1562]: E0909 23:18:46.703935 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:18:46.706523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:18:46.706668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:18:46.707618 systemd[1]: kubelet.service: Consumed 749ms CPU time, 258.8M memory peak.
Sep 9 23:18:49.462171 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:18:49.480815 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:54792.service - OpenSSH per-connection server daemon (10.0.0.1:54792).
Sep 9 23:18:49.535195 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 54792 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:18:49.538084 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:18:49.553304 systemd-logind[1463]: New session 1 of user core.
Sep 9 23:18:49.554231 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:18:49.564771 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:18:49.573928 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:18:49.576003 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:18:49.582781 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:18:49.584906 systemd-logind[1463]: New session c1 of user core.
Sep 9 23:18:49.681956 systemd[1580]: Queued start job for default target default.target.
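The kubelet exit above is the usual symptom of kubelet.service starting before kubeadm has written /var/lib/kubelet/config.yaml; the unset KUBELET_KUBEADM_ARGS variable logged earlier points the same way, and kubeadm's drop-in typically restarts the unit until the file exists. As a hedged sketch (illustrative values, not this host's eventual config), a minimal KubeletConfiguration of the kind kubeadm places there looks like:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml; values are
# illustrative, not recovered from this host.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # consistent with SystemdCgroup:true in the containerd CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

Once kubeadm init or kubeadm join generates the real file, the next restart of kubelet.service should proceed past this error.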
Sep 9 23:18:49.693475 systemd[1580]: Created slice app.slice - User Application Slice.
Sep 9 23:18:49.693517 systemd[1580]: Reached target paths.target - Paths.
Sep 9 23:18:49.693555 systemd[1580]: Reached target timers.target - Timers.
Sep 9 23:18:49.694878 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:18:49.704824 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:18:49.704895 systemd[1580]: Reached target sockets.target - Sockets.
Sep 9 23:18:49.704936 systemd[1580]: Reached target basic.target - Basic System.
Sep 9 23:18:49.704969 systemd[1580]: Reached target default.target - Main User Target.
Sep 9 23:18:49.704995 systemd[1580]: Startup finished in 114ms.
Sep 9 23:18:49.705213 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:18:49.706744 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:18:49.765132 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:54808.service - OpenSSH per-connection server daemon (10.0.0.1:54808).
Sep 9 23:18:49.805102 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 54808 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:18:49.806361 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:18:49.810568 systemd-logind[1463]: New session 2 of user core.
Sep 9 23:18:49.820707 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:18:49.874174 sshd[1593]: Connection closed by 10.0.0.1 port 54808
Sep 9 23:18:49.876227 sshd-session[1591]: pam_unix(sshd:session): session closed for user core
Sep 9 23:18:49.887699 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:54808.service: Deactivated successfully.
Sep 9 23:18:49.889582 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:18:49.890301 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:18:49.900859 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:54818.service - OpenSSH per-connection server daemon (10.0.0.1:54818). Sep 9 23:18:49.901407 systemd-logind[1463]: Removed session 2. Sep 9 23:18:49.939342 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 54818 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:18:49.940709 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:18:49.946371 systemd-logind[1463]: New session 3 of user core. Sep 9 23:18:49.958733 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:18:50.007538 sshd[1601]: Connection closed by 10.0.0.1 port 54818 Sep 9 23:18:50.008010 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Sep 9 23:18:50.031690 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:54818.service: Deactivated successfully. Sep 9 23:18:50.033283 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:18:50.034074 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:18:50.048821 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:54832.service - OpenSSH per-connection server daemon (10.0.0.1:54832). Sep 9 23:18:50.049578 systemd-logind[1463]: Removed session 3. Sep 9 23:18:50.086370 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54832 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:18:50.087705 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:18:50.094864 systemd-logind[1463]: New session 4 of user core. Sep 9 23:18:50.108706 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:18:50.160152 sshd[1609]: Connection closed by 10.0.0.1 port 54832 Sep 9 23:18:50.160027 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Sep 9 23:18:50.172272 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:54832.service: Deactivated successfully. 
Sep 9 23:18:50.173932 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:18:50.175574 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:18:50.191514 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:45312.service - OpenSSH per-connection server daemon (10.0.0.1:45312). Sep 9 23:18:50.192996 systemd-logind[1463]: Removed session 4. Sep 9 23:18:50.230880 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 45312 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:18:50.232113 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:18:50.237341 systemd-logind[1463]: New session 5 of user core. Sep 9 23:18:50.245704 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 23:18:50.304786 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:18:50.305536 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:18:50.322419 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 9 23:18:50.326019 sshd[1617]: Connection closed by 10.0.0.1 port 45312 Sep 9 23:18:50.326810 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Sep 9 23:18:50.341174 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:45312.service: Deactivated successfully. Sep 9 23:18:50.344088 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:18:50.348449 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:18:50.360979 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:45326.service - OpenSSH per-connection server daemon (10.0.0.1:45326). Sep 9 23:18:50.363184 systemd-logind[1463]: Removed session 5. 
Sep 9 23:18:50.398542 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 45326 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:18:50.399823 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:18:50.404315 systemd-logind[1463]: New session 6 of user core. Sep 9 23:18:50.412727 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:18:50.464316 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:18:50.464665 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:18:50.468142 sudo[1628]: pam_unix(sudo:session): session closed for user root Sep 9 23:18:50.472914 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:18:50.473166 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:18:50.491956 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:18:50.515811 augenrules[1650]: No rules Sep 9 23:18:50.516998 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:18:50.517201 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:18:50.518090 sudo[1627]: pam_unix(sudo:session): session closed for user root Sep 9 23:18:50.519520 sshd[1626]: Connection closed by 10.0.0.1 port 45326 Sep 9 23:18:50.519809 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Sep 9 23:18:50.532204 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:45326.service: Deactivated successfully. Sep 9 23:18:50.533828 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:18:50.535165 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:18:50.536205 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). 
Sep 9 23:18:50.536922 systemd-logind[1463]: Removed session 6. Sep 9 23:18:50.579121 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:18:50.580236 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:18:50.583994 systemd-logind[1463]: New session 7 of user core. Sep 9 23:18:50.593670 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 23:18:50.643241 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:18:50.643874 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:18:50.943754 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:18:50.943891 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:18:51.145864 dockerd[1684]: time="2025-09-09T23:18:51.145800307Z" level=info msg="Starting up" Sep 9 23:18:51.741631 dockerd[1684]: time="2025-09-09T23:18:51.741424816Z" level=info msg="Loading containers: start." Sep 9 23:18:51.890528 kernel: Initializing XFRM netlink socket Sep 9 23:18:51.956178 systemd-networkd[1386]: docker0: Link UP Sep 9 23:18:52.029758 dockerd[1684]: time="2025-09-09T23:18:52.029647469Z" level=info msg="Loading containers: done." Sep 9 23:18:52.040949 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2074087983-merged.mount: Deactivated successfully. 
Sep 9 23:18:52.050872 dockerd[1684]: time="2025-09-09T23:18:52.050824988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:18:52.051101 dockerd[1684]: time="2025-09-09T23:18:52.051082365Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 9 23:18:52.051333 dockerd[1684]: time="2025-09-09T23:18:52.051314510Z" level=info msg="Daemon has completed initialization" Sep 9 23:18:52.077892 dockerd[1684]: time="2025-09-09T23:18:52.077835858Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:18:52.078013 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:18:52.732544 containerd[1476]: time="2025-09-09T23:18:52.732396621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 23:18:53.672400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718283178.mount: Deactivated successfully. 
Sep 9 23:18:54.887433 containerd[1476]: time="2025-09-09T23:18:54.887382498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:54.887923 containerd[1476]: time="2025-09-09T23:18:54.887843992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 9 23:18:54.888567 containerd[1476]: time="2025-09-09T23:18:54.888541481Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:54.891697 containerd[1476]: time="2025-09-09T23:18:54.891638586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:54.892706 containerd[1476]: time="2025-09-09T23:18:54.892680912Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.160250578s" Sep 9 23:18:54.892758 containerd[1476]: time="2025-09-09T23:18:54.892711576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 23:18:54.894735 containerd[1476]: time="2025-09-09T23:18:54.894565088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 23:18:56.292018 containerd[1476]: time="2025-09-09T23:18:56.291972487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:56.292518 containerd[1476]: time="2025-09-09T23:18:56.292455719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 9 23:18:56.293395 containerd[1476]: time="2025-09-09T23:18:56.293369133Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:56.297517 containerd[1476]: time="2025-09-09T23:18:56.296129051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:56.298333 containerd[1476]: time="2025-09-09T23:18:56.298297539Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.403701128s" Sep 9 23:18:56.298427 containerd[1476]: time="2025-09-09T23:18:56.298410607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 23:18:56.299014 containerd[1476]: time="2025-09-09T23:18:56.298987073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 23:18:56.957041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:18:56.966661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:18:57.060030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:18:57.062998 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:18:57.096040 kubelet[1947]: E0909 23:18:57.095982 1947 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:18:57.099207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:18:57.099342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:18:57.099706 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.1M memory peak. Sep 9 23:18:57.662207 containerd[1476]: time="2025-09-09T23:18:57.662106156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:57.663287 containerd[1476]: time="2025-09-09T23:18:57.663236876Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 9 23:18:57.664181 containerd[1476]: time="2025-09-09T23:18:57.664149492Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:57.667766 containerd[1476]: time="2025-09-09T23:18:57.667729154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:57.669575 containerd[1476]: time="2025-09-09T23:18:57.669504080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id 
\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.37034785s" Sep 9 23:18:57.669616 containerd[1476]: time="2025-09-09T23:18:57.669574007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 23:18:57.670091 containerd[1476]: time="2025-09-09T23:18:57.670055090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 23:18:58.719538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834799481.mount: Deactivated successfully. Sep 9 23:18:58.983579 containerd[1476]: time="2025-09-09T23:18:58.983421778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:58.983876 containerd[1476]: time="2025-09-09T23:18:58.983755854Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 23:18:58.984791 containerd[1476]: time="2025-09-09T23:18:58.984750819Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:58.986952 containerd[1476]: time="2025-09-09T23:18:58.986920446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:18:58.987600 containerd[1476]: time="2025-09-09T23:18:58.987556254Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.317466152s" Sep 9 23:18:58.987642 containerd[1476]: time="2025-09-09T23:18:58.987601575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 23:18:58.988101 containerd[1476]: time="2025-09-09T23:18:58.988066058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 23:18:59.708527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456946987.mount: Deactivated successfully. Sep 9 23:19:00.656098 containerd[1476]: time="2025-09-09T23:19:00.656029639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:00.656660 containerd[1476]: time="2025-09-09T23:19:00.656616569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 9 23:19:00.657545 containerd[1476]: time="2025-09-09T23:19:00.657518752Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:00.661150 containerd[1476]: time="2025-09-09T23:19:00.661111579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:00.662951 containerd[1476]: time="2025-09-09T23:19:00.662902069Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.674805953s" Sep 9 23:19:00.663000 containerd[1476]: time="2025-09-09T23:19:00.662951212Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 23:19:00.663432 containerd[1476]: time="2025-09-09T23:19:00.663402562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:19:01.082386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571589321.mount: Deactivated successfully. Sep 9 23:19:01.087194 containerd[1476]: time="2025-09-09T23:19:01.086435712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:01.087194 containerd[1476]: time="2025-09-09T23:19:01.086825252Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 23:19:01.087766 containerd[1476]: time="2025-09-09T23:19:01.087724814Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:01.089962 containerd[1476]: time="2025-09-09T23:19:01.089919567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:01.090705 containerd[1476]: time="2025-09-09T23:19:01.090680860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 427.244617ms" Sep 9 
23:19:01.090923 containerd[1476]: time="2025-09-09T23:19:01.090800906Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:19:01.091354 containerd[1476]: time="2025-09-09T23:19:01.091267080Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 23:19:01.506909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282975284.mount: Deactivated successfully. Sep 9 23:19:03.384044 containerd[1476]: time="2025-09-09T23:19:03.382830244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:03.384044 containerd[1476]: time="2025-09-09T23:19:03.383662239Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 9 23:19:03.384596 containerd[1476]: time="2025-09-09T23:19:03.384566980Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:03.388628 containerd[1476]: time="2025-09-09T23:19:03.388573696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:03.389619 containerd[1476]: time="2025-09-09T23:19:03.389576640Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.298081155s" Sep 9 23:19:03.389958 containerd[1476]: time="2025-09-09T23:19:03.389712013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference 
\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 23:19:07.349758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:19:07.359672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:07.470222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:07.473829 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:19:07.508402 kubelet[2111]: E0909 23:19:07.508358 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:19:07.511113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:19:07.511253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:19:07.511543 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.1M memory peak. Sep 9 23:19:07.636363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:07.636525 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.1M memory peak. Sep 9 23:19:07.644776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:07.667013 systemd[1]: Reload requested from client PID 2126 ('systemctl') (unit session-7.scope)... Sep 9 23:19:07.667029 systemd[1]: Reloading... Sep 9 23:19:07.746537 zram_generator::config[2173]: No configuration found. Sep 9 23:19:07.926398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 9 23:19:08.006862 systemd[1]: Reloading finished in 339 ms. Sep 9 23:19:08.045152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:08.048268 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:19:08.049285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:08.049718 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:19:08.049920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:08.049957 systemd[1]: kubelet.service: Consumed 83ms CPU time, 95.1M memory peak. Sep 9 23:19:08.052185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:08.152407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:08.155738 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:19:08.185655 kubelet[2218]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:19:08.185655 kubelet[2218]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:19:08.185655 kubelet[2218]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:19:08.185655 kubelet[2218]: I0909 23:19:08.185392 2218 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:19:09.087875 kubelet[2218]: I0909 23:19:09.087829 2218 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:19:09.087875 kubelet[2218]: I0909 23:19:09.087864 2218 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:19:09.088088 kubelet[2218]: I0909 23:19:09.088073 2218 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:19:09.102554 kubelet[2218]: E0909 23:19:09.102467 2218 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 23:19:09.106044 kubelet[2218]: I0909 23:19:09.105835 2218 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:19:09.111512 kubelet[2218]: E0909 23:19:09.111473 2218 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 23:19:09.111573 kubelet[2218]: I0909 23:19:09.111515 2218 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 23:19:09.114130 kubelet[2218]: I0909 23:19:09.114111 2218 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:19:09.115107 kubelet[2218]: I0909 23:19:09.115062 2218 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:19:09.115277 kubelet[2218]: I0909 23:19:09.115097 2218 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:19:09.115365 kubelet[2218]: I0909 23:19:09.115345 2218 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:19:09.115365 
kubelet[2218]: I0909 23:19:09.115356 2218 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:19:09.115566 kubelet[2218]: I0909 23:19:09.115543 2218 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:19:09.118503 kubelet[2218]: I0909 23:19:09.118478 2218 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:19:09.118558 kubelet[2218]: I0909 23:19:09.118509 2218 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:19:09.118558 kubelet[2218]: I0909 23:19:09.118535 2218 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:19:09.119975 kubelet[2218]: I0909 23:19:09.119771 2218 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:19:09.121880 kubelet[2218]: E0909 23:19:09.121294 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 23:19:09.121880 kubelet[2218]: I0909 23:19:09.121422 2218 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 9 23:19:09.121880 kubelet[2218]: E0909 23:19:09.121835 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 23:19:09.122622 kubelet[2218]: I0909 23:19:09.122242 2218 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:19:09.122622 kubelet[2218]: W0909 23:19:09.122369 2218 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 23:19:09.127457 kubelet[2218]: I0909 23:19:09.126883 2218 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:19:09.127457 kubelet[2218]: I0909 23:19:09.126923 2218 server.go:1289] "Started kubelet" Sep 9 23:19:09.127457 kubelet[2218]: I0909 23:19:09.127360 2218 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:19:09.129725 kubelet[2218]: I0909 23:19:09.129705 2218 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:19:09.129857 kubelet[2218]: I0909 23:19:09.129831 2218 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:19:09.130220 kubelet[2218]: I0909 23:19:09.130171 2218 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:19:09.130300 kubelet[2218]: I0909 23:19:09.130273 2218 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:19:09.130468 kubelet[2218]: I0909 23:19:09.130445 2218 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:19:09.131001 kubelet[2218]: I0909 23:19:09.130971 2218 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:19:09.131068 kubelet[2218]: E0909 23:19:09.131054 2218 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:19:09.131912 kubelet[2218]: I0909 23:19:09.131890 2218 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:19:09.131973 kubelet[2218]: I0909 23:19:09.131951 2218 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:19:09.132267 kubelet[2218]: E0909 23:19:09.132221 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:19:09.132324 kubelet[2218]: E0909 23:19:09.132282 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Sep 9 23:19:09.132407 kubelet[2218]: E0909 23:19:09.131236 2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c08326af5ff4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:19:09.126901748 +0000 UTC m=+0.968160588,LastTimestamp:2025-09-09 23:19:09.126901748 +0000 UTC m=+0.968160588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:19:09.132659 kubelet[2218]: I0909 23:19:09.132585 2218 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:19:09.133746 kubelet[2218]: I0909 23:19:09.133727 2218 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:19:09.133887 kubelet[2218]: I0909 23:19:09.133877 2218 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:19:09.134296 
kubelet[2218]: E0909 23:19:09.134274 2218 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:19:09.145381 kubelet[2218]: I0909 23:19:09.145355 2218 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:19:09.145381 kubelet[2218]: I0909 23:19:09.145375 2218 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:19:09.145482 kubelet[2218]: I0909 23:19:09.145392 2218 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:19:09.146944 kubelet[2218]: I0909 23:19:09.146909 2218 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:19:09.147890 kubelet[2218]: I0909 23:19:09.147861 2218 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:19:09.147890 kubelet[2218]: I0909 23:19:09.147881 2218 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:19:09.147977 kubelet[2218]: I0909 23:19:09.147902 2218 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:19:09.147977 kubelet[2218]: I0909 23:19:09.147910 2218 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:19:09.148133 kubelet[2218]: E0909 23:19:09.148111 2218 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:19:09.149263 kubelet[2218]: I0909 23:19:09.149233 2218 policy_none.go:49] "None policy: Start" Sep 9 23:19:09.149263 kubelet[2218]: I0909 23:19:09.149258 2218 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:19:09.149321 kubelet[2218]: I0909 23:19:09.149269 2218 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:19:09.151442 kubelet[2218]: E0909 23:19:09.151258 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 23:19:09.154469 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:19:09.175904 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:19:09.179084 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 23:19:09.188358 kubelet[2218]: E0909 23:19:09.188328 2218 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:19:09.188629 kubelet[2218]: I0909 23:19:09.188556 2218 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:19:09.188629 kubelet[2218]: I0909 23:19:09.188568 2218 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:19:09.188967 kubelet[2218]: I0909 23:19:09.188798 2218 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:19:09.190626 kubelet[2218]: E0909 23:19:09.190606 2218 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:19:09.190739 kubelet[2218]: E0909 23:19:09.190725 2218 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:19:09.257919 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 9 23:19:09.281728 kubelet[2218]: E0909 23:19:09.281688 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:09.284313 systemd[1]: Created slice kubepods-burstable-pode7f6bf1b77c4c34dd47268ec637b5c5e.slice - libcontainer container kubepods-burstable-pode7f6bf1b77c4c34dd47268ec637b5c5e.slice. 
Sep 9 23:19:09.291097 kubelet[2218]: I0909 23:19:09.291066 2218 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:19:09.291533 kubelet[2218]: E0909 23:19:09.291501 2218 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Sep 9 23:19:09.296528 kubelet[2218]: E0909 23:19:09.296402 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:09.298649 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 23:19:09.299964 kubelet[2218]: E0909 23:19:09.299923 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:09.332699 kubelet[2218]: E0909 23:19:09.332668 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Sep 9 23:19:09.333751 kubelet[2218]: I0909 23:19:09.333693 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:09.333751 kubelet[2218]: I0909 23:19:09.333722 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:09.333869 kubelet[2218]: I0909 23:19:09.333791 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:09.333869 kubelet[2218]: I0909 23:19:09.333820 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:09.333869 kubelet[2218]: I0909 23:19:09.333838 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:09.333869 kubelet[2218]: I0909 23:19:09.333854 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:09.333869 kubelet[2218]: I0909 23:19:09.333867 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:09.333989 kubelet[2218]: I0909 23:19:09.333886 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:09.333989 kubelet[2218]: I0909 23:19:09.333909 2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:09.493271 kubelet[2218]: I0909 23:19:09.493160 2218 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:19:09.493540 kubelet[2218]: E0909 23:19:09.493507 2218 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Sep 9 23:19:09.582429 kubelet[2218]: E0909 23:19:09.582152 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:09.584876 containerd[1476]: time="2025-09-09T23:19:09.584815646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:09.596884 kubelet[2218]: E0909 23:19:09.596810 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:09.597254 containerd[1476]: time="2025-09-09T23:19:09.597222218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7f6bf1b77c4c34dd47268ec637b5c5e,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:09.600922 kubelet[2218]: E0909 23:19:09.600715 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:09.601069 containerd[1476]: time="2025-09-09T23:19:09.601040903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:09.733635 kubelet[2218]: E0909 23:19:09.733580 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Sep 9 23:19:09.895385 kubelet[2218]: I0909 23:19:09.895286 2218 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:19:09.895628 kubelet[2218]: E0909 23:19:09.895590 2218 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Sep 9 23:19:10.026728 kubelet[2218]: E0909 23:19:10.026660 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:19:10.109604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740612.mount: Deactivated successfully. 
Sep 9 23:19:10.114292 containerd[1476]: time="2025-09-09T23:19:10.114212209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:19:10.116166 containerd[1476]: time="2025-09-09T23:19:10.116122383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 23:19:10.117577 containerd[1476]: time="2025-09-09T23:19:10.116823113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:19:10.118202 containerd[1476]: time="2025-09-09T23:19:10.118153326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 9 23:19:10.120274 containerd[1476]: time="2025-09-09T23:19:10.120233910Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:19:10.121710 containerd[1476]: time="2025-09-09T23:19:10.121466956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 23:19:10.121710 containerd[1476]: time="2025-09-09T23:19:10.121626703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:19:10.125286 containerd[1476]: time="2025-09-09T23:19:10.125256313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:19:10.127826 
containerd[1476]: time="2025-09-09T23:19:10.127622085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.326078ms" Sep 9 23:19:10.129956 containerd[1476]: time="2025-09-09T23:19:10.129912536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.023901ms" Sep 9 23:19:10.132597 containerd[1476]: time="2025-09-09T23:19:10.132569887Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.464658ms" Sep 9 23:19:10.180389 kubelet[2218]: E0909 23:19:10.180201 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 23:19:10.223076 containerd[1476]: time="2025-09-09T23:19:10.222810924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:10.223076 containerd[1476]: time="2025-09-09T23:19:10.222879775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:10.223076 containerd[1476]: time="2025-09-09T23:19:10.222894991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.223581 containerd[1476]: time="2025-09-09T23:19:10.223290804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:10.223581 containerd[1476]: time="2025-09-09T23:19:10.223345078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:10.223581 containerd[1476]: time="2025-09-09T23:19:10.223379903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.223581 containerd[1476]: time="2025-09-09T23:19:10.223021391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.223791 containerd[1476]: time="2025-09-09T23:19:10.223452468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.224704 containerd[1476]: time="2025-09-09T23:19:10.224556639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:10.224794 containerd[1476]: time="2025-09-09T23:19:10.224705802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:10.224794 containerd[1476]: time="2025-09-09T23:19:10.224732320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.224866 containerd[1476]: time="2025-09-09T23:19:10.224824614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:10.248687 systemd[1]: Started cri-containerd-1c872a15c21cff30664a8447128d69a78ea14dabd9a8cd1249eedbb21e7fe980.scope - libcontainer container 1c872a15c21cff30664a8447128d69a78ea14dabd9a8cd1249eedbb21e7fe980. Sep 9 23:19:10.250056 systemd[1]: Started cri-containerd-b3ecdc5588a6e4f3e3ba3f4728f6b0d79f720368656d8fb5675eda5994bd831a.scope - libcontainer container b3ecdc5588a6e4f3e3ba3f4728f6b0d79f720368656d8fb5675eda5994bd831a. Sep 9 23:19:10.251462 systemd[1]: Started cri-containerd-cbc636516b7961cfca6f3431bdeab5fff2a533bcca4a2435a27b6fdb9820c52e.scope - libcontainer container cbc636516b7961cfca6f3431bdeab5fff2a533bcca4a2435a27b6fdb9820c52e. Sep 9 23:19:10.278045 containerd[1476]: time="2025-09-09T23:19:10.278009797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c872a15c21cff30664a8447128d69a78ea14dabd9a8cd1249eedbb21e7fe980\"" Sep 9 23:19:10.279611 kubelet[2218]: E0909 23:19:10.279457 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:10.285917 containerd[1476]: time="2025-09-09T23:19:10.285725174Z" level=info msg="CreateContainer within sandbox \"1c872a15c21cff30664a8447128d69a78ea14dabd9a8cd1249eedbb21e7fe980\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:19:10.288618 containerd[1476]: time="2025-09-09T23:19:10.288524699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"cbc636516b7961cfca6f3431bdeab5fff2a533bcca4a2435a27b6fdb9820c52e\"" Sep 9 23:19:10.289120 kubelet[2218]: E0909 23:19:10.289096 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:10.291210 containerd[1476]: time="2025-09-09T23:19:10.291038636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7f6bf1b77c4c34dd47268ec637b5c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ecdc5588a6e4f3e3ba3f4728f6b0d79f720368656d8fb5675eda5994bd831a\"" Sep 9 23:19:10.292314 kubelet[2218]: E0909 23:19:10.292290 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:10.293520 containerd[1476]: time="2025-09-09T23:19:10.293480567Z" level=info msg="CreateContainer within sandbox \"cbc636516b7961cfca6f3431bdeab5fff2a533bcca4a2435a27b6fdb9820c52e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:19:10.295501 containerd[1476]: time="2025-09-09T23:19:10.295460431Z" level=info msg="CreateContainer within sandbox \"b3ecdc5588a6e4f3e3ba3f4728f6b0d79f720368656d8fb5675eda5994bd831a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:19:10.306425 containerd[1476]: time="2025-09-09T23:19:10.306381489Z" level=info msg="CreateContainer within sandbox \"1c872a15c21cff30664a8447128d69a78ea14dabd9a8cd1249eedbb21e7fe980\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b070910d0c61c79cbe14f0c533749b9da3c3ccb5aa06e1baf2151bb48b32c6b5\"" Sep 9 23:19:10.307029 containerd[1476]: time="2025-09-09T23:19:10.307001427Z" level=info msg="StartContainer for \"b070910d0c61c79cbe14f0c533749b9da3c3ccb5aa06e1baf2151bb48b32c6b5\"" Sep 9 23:19:10.311789 containerd[1476]: time="2025-09-09T23:19:10.311686365Z" 
level=info msg="CreateContainer within sandbox \"cbc636516b7961cfca6f3431bdeab5fff2a533bcca4a2435a27b6fdb9820c52e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb5d7865748e1c4927064da033d59aee1129231775ac8086210a62c1217c16a5\"" Sep 9 23:19:10.312186 containerd[1476]: time="2025-09-09T23:19:10.312157539Z" level=info msg="StartContainer for \"eb5d7865748e1c4927064da033d59aee1129231775ac8086210a62c1217c16a5\"" Sep 9 23:19:10.312332 containerd[1476]: time="2025-09-09T23:19:10.312272996Z" level=info msg="CreateContainer within sandbox \"b3ecdc5588a6e4f3e3ba3f4728f6b0d79f720368656d8fb5675eda5994bd831a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8179030b01042530b1677b9ba2431c2881a1732b692c5232cec7f60aa1184f0\"" Sep 9 23:19:10.312599 containerd[1476]: time="2025-09-09T23:19:10.312558104Z" level=info msg="StartContainer for \"d8179030b01042530b1677b9ba2431c2881a1732b692c5232cec7f60aa1184f0\"" Sep 9 23:19:10.340655 systemd[1]: Started cri-containerd-b070910d0c61c79cbe14f0c533749b9da3c3ccb5aa06e1baf2151bb48b32c6b5.scope - libcontainer container b070910d0c61c79cbe14f0c533749b9da3c3ccb5aa06e1baf2151bb48b32c6b5. Sep 9 23:19:10.341802 kubelet[2218]: E0909 23:19:10.341760 2218 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 23:19:10.341918 systemd[1]: Started cri-containerd-eb5d7865748e1c4927064da033d59aee1129231775ac8086210a62c1217c16a5.scope - libcontainer container eb5d7865748e1c4927064da033d59aee1129231775ac8086210a62c1217c16a5. 
Sep 9 23:19:10.345108 systemd[1]: Started cri-containerd-d8179030b01042530b1677b9ba2431c2881a1732b692c5232cec7f60aa1184f0.scope - libcontainer container d8179030b01042530b1677b9ba2431c2881a1732b692c5232cec7f60aa1184f0. Sep 9 23:19:10.379663 containerd[1476]: time="2025-09-09T23:19:10.378708587Z" level=info msg="StartContainer for \"eb5d7865748e1c4927064da033d59aee1129231775ac8086210a62c1217c16a5\" returns successfully" Sep 9 23:19:10.379663 containerd[1476]: time="2025-09-09T23:19:10.378958751Z" level=info msg="StartContainer for \"b070910d0c61c79cbe14f0c533749b9da3c3ccb5aa06e1baf2151bb48b32c6b5\" returns successfully" Sep 9 23:19:10.389001 containerd[1476]: time="2025-09-09T23:19:10.388959068Z" level=info msg="StartContainer for \"d8179030b01042530b1677b9ba2431c2881a1732b692c5232cec7f60aa1184f0\" returns successfully" Sep 9 23:19:10.704869 kubelet[2218]: I0909 23:19:10.704542 2218 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:19:11.158853 kubelet[2218]: E0909 23:19:11.158647 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:11.158853 kubelet[2218]: E0909 23:19:11.158830 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:11.160643 kubelet[2218]: E0909 23:19:11.160618 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:11.160783 kubelet[2218]: E0909 23:19:11.160726 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:11.162294 kubelet[2218]: E0909 23:19:11.162273 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:11.162388 kubelet[2218]: E0909 23:19:11.162375 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:12.166562 kubelet[2218]: E0909 23:19:12.166245 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:12.166562 kubelet[2218]: E0909 23:19:12.166376 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:12.166562 kubelet[2218]: E0909 23:19:12.166401 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:12.166562 kubelet[2218]: E0909 23:19:12.166487 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:12.167213 kubelet[2218]: E0909 23:19:12.167056 2218 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:19:12.167213 kubelet[2218]: E0909 23:19:12.167168 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:12.295329 kubelet[2218]: E0909 23:19:12.295287 2218 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 23:19:12.358678 kubelet[2218]: I0909 23:19:12.358628 2218 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Sep 9 23:19:12.358678 kubelet[2218]: E0909 23:19:12.358671 2218 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 23:19:12.404463 kubelet[2218]: E0909 23:19:12.404210 2218 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863c08326af5ff4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:19:09.126901748 +0000 UTC m=+0.968160588,LastTimestamp:2025-09-09 23:19:09.126901748 +0000 UTC m=+0.968160588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:19:12.432840 kubelet[2218]: I0909 23:19:12.432274 2218 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:12.439227 kubelet[2218]: E0909 23:19:12.438766 2218 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:12.439227 kubelet[2218]: I0909 23:19:12.438796 2218 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:12.441448 kubelet[2218]: E0909 23:19:12.441425 2218 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:12.441448 kubelet[2218]: I0909 23:19:12.441447 2218 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:12.443134 kubelet[2218]: E0909 23:19:12.443057 2218 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:13.123560 kubelet[2218]: I0909 23:19:13.123532 2218 apiserver.go:52] "Watching apiserver" Sep 9 23:19:13.132007 kubelet[2218]: I0909 23:19:13.131967 2218 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:19:14.094589 systemd[1]: Reload requested from client PID 2505 ('systemctl') (unit session-7.scope)... Sep 9 23:19:14.094605 systemd[1]: Reloading... Sep 9 23:19:14.170523 zram_generator::config[2552]: No configuration found. Sep 9 23:19:14.257166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:19:14.352638 systemd[1]: Reloading finished in 257 ms. Sep 9 23:19:14.372110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:14.382473 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:19:14.382751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:19:14.382804 systemd[1]: kubelet.service: Consumed 1.334s CPU time, 126.9M memory peak. Sep 9 23:19:14.402588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:19:14.543543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:19:14.556024 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:19:14.596232 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:19:14.598442 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:19:14.598442 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:19:14.598442 kubelet[2591]: I0909 23:19:14.596639 2591 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:19:14.602475 kubelet[2591]: I0909 23:19:14.602421 2591 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:19:14.602587 kubelet[2591]: I0909 23:19:14.602536 2591 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:19:14.603111 kubelet[2591]: I0909 23:19:14.602773 2591 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:19:14.604088 kubelet[2591]: I0909 23:19:14.604054 2591 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 23:19:14.607521 kubelet[2591]: I0909 23:19:14.607405 2591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:19:14.612305 kubelet[2591]: E0909 23:19:14.612271 2591 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 23:19:14.612666 kubelet[2591]: I0909 23:19:14.612441 2591 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 23:19:14.615381 kubelet[2591]: I0909 23:19:14.615358 2591 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:19:14.615637 kubelet[2591]: I0909 23:19:14.615609 2591 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:19:14.615780 kubelet[2591]: I0909 23:19:14.615638 2591 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:19:14.615849 kubelet[2591]: I0909 23:19:14.615790 2591 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:19:14.615849 kubelet[2591]: I0909 23:19:14.615799 2591 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:19:14.615849 kubelet[2591]: I0909 23:19:14.615840 2591 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:19:14.615997 kubelet[2591]: I0909 23:19:14.615985 2591 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:19:14.616025 kubelet[2591]: I0909 23:19:14.616000 2591 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:19:14.616046 kubelet[2591]: I0909 23:19:14.616024 2591 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:19:14.616046 kubelet[2591]: I0909 23:19:14.616037 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:19:14.616888 kubelet[2591]: I0909 23:19:14.616851 2591 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 9 23:19:14.621044 kubelet[2591]: I0909 23:19:14.621004 2591 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:19:14.624347 kubelet[2591]: I0909 23:19:14.623710 2591 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:19:14.624347 kubelet[2591]: I0909 23:19:14.623786 2591 server.go:1289] "Started kubelet" Sep 9 23:19:14.624451 kubelet[2591]: I0909 23:19:14.624400 2591 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 
23:19:14.625316 kubelet[2591]: I0909 23:19:14.625289 2591 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:19:14.625770 kubelet[2591]: I0909 23:19:14.625706 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:19:14.626083 kubelet[2591]: I0909 23:19:14.626049 2591 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:19:14.627044 kubelet[2591]: I0909 23:19:14.627019 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:19:14.633585 kubelet[2591]: I0909 23:19:14.633551 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:19:14.636309 kubelet[2591]: E0909 23:19:14.636279 2591 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:19:14.637258 kubelet[2591]: I0909 23:19:14.637230 2591 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:19:14.640061 kubelet[2591]: I0909 23:19:14.640032 2591 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:19:14.640061 kubelet[2591]: I0909 23:19:14.640066 2591 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:19:14.640449 kubelet[2591]: I0909 23:19:14.640422 2591 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:19:14.644522 kubelet[2591]: I0909 23:19:14.643801 2591 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:19:14.644522 kubelet[2591]: I0909 23:19:14.643828 2591 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:19:14.649246 kubelet[2591]: I0909 23:19:14.649110 2591 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:19:14.651606 kubelet[2591]: I0909 23:19:14.651577 2591 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:19:14.651606 kubelet[2591]: I0909 23:19:14.651602 2591 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:19:14.651739 kubelet[2591]: I0909 23:19:14.651622 2591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:19:14.651739 kubelet[2591]: I0909 23:19:14.651629 2591 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:19:14.651739 kubelet[2591]: E0909 23:19:14.651671 2591 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:19:14.676615 kubelet[2591]: I0909 23:19:14.676584 2591 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:19:14.676615 kubelet[2591]: I0909 23:19:14.676606 2591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:19:14.676799 kubelet[2591]: I0909 23:19:14.676630 2591 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:19:14.676799 kubelet[2591]: I0909 23:19:14.676780 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:19:14.676854 kubelet[2591]: I0909 23:19:14.676791 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:19:14.676854 kubelet[2591]: I0909 23:19:14.676809 2591 policy_none.go:49] "None policy: Start" Sep 9 23:19:14.676854 kubelet[2591]: I0909 23:19:14.676817 2591 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:19:14.676854 kubelet[2591]: I0909 23:19:14.676825 2591 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:19:14.676933 kubelet[2591]: I0909 23:19:14.676924 2591 state_mem.go:75] "Updated machine memory state" Sep 9 23:19:14.680597 kubelet[2591]: E0909 
23:19:14.680571 2591 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:19:14.680959 kubelet[2591]: I0909 23:19:14.680757 2591 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:19:14.680959 kubelet[2591]: I0909 23:19:14.680780 2591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:19:14.681039 kubelet[2591]: I0909 23:19:14.680999 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:19:14.682344 kubelet[2591]: E0909 23:19:14.682301 2591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:19:14.753201 kubelet[2591]: I0909 23:19:14.753163 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:14.753878 kubelet[2591]: I0909 23:19:14.753233 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:14.753878 kubelet[2591]: I0909 23:19:14.753164 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.793951 kubelet[2591]: I0909 23:19:14.793721 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:19:14.841607 kubelet[2591]: I0909 23:19:14.841544 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:14.841607 kubelet[2591]: I0909 23:19:14.841589 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:14.841607 kubelet[2591]: I0909 23:19:14.841607 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.841821 kubelet[2591]: I0909 23:19:14.841627 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.841821 kubelet[2591]: I0909 23:19:14.841645 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.841821 kubelet[2591]: I0909 23:19:14.841662 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7f6bf1b77c4c34dd47268ec637b5c5e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7f6bf1b77c4c34dd47268ec637b5c5e\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:14.841821 kubelet[2591]: I0909 23:19:14.841677 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.841821 kubelet[2591]: I0909 23:19:14.841721 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:19:14.841921 kubelet[2591]: I0909 23:19:14.841736 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:14.951445 kubelet[2591]: I0909 23:19:14.951320 2591 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 23:19:14.951445 kubelet[2591]: I0909 23:19:14.951420 2591 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:19:15.057987 kubelet[2591]: E0909 23:19:15.057933 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:15.062424 kubelet[2591]: E0909 23:19:15.062395 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:15.062591 kubelet[2591]: E0909 23:19:15.062565 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 9 23:19:15.097026 sudo[2632]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:19:15.097325 sudo[2632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:19:15.532455 sudo[2632]: pam_unix(sudo:session): session closed for user root Sep 9 23:19:15.617649 kubelet[2591]: I0909 23:19:15.617600 2591 apiserver.go:52] "Watching apiserver" Sep 9 23:19:15.640756 kubelet[2591]: I0909 23:19:15.640701 2591 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:19:15.662035 kubelet[2591]: I0909 23:19:15.661940 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:15.662035 kubelet[2591]: I0909 23:19:15.662037 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:15.663815 kubelet[2591]: E0909 23:19:15.663795 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:15.667173 kubelet[2591]: E0909 23:19:15.666939 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 23:19:15.667173 kubelet[2591]: E0909 23:19:15.667103 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:15.675804 kubelet[2591]: E0909 23:19:15.675773 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:19:15.675965 kubelet[2591]: E0909 23:19:15.675951 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:15.684212 kubelet[2591]: I0909 23:19:15.684026 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.684010098 podStartE2EDuration="1.684010098s" podCreationTimestamp="2025-09-09 23:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:15.683771932 +0000 UTC m=+1.123908692" watchObservedRunningTime="2025-09-09 23:19:15.684010098 +0000 UTC m=+1.124146818" Sep 9 23:19:15.692750 kubelet[2591]: I0909 23:19:15.692504 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.692477663 podStartE2EDuration="1.692477663s" podCreationTimestamp="2025-09-09 23:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:15.692248569 +0000 UTC m=+1.132385329" watchObservedRunningTime="2025-09-09 23:19:15.692477663 +0000 UTC m=+1.132614423" Sep 9 23:19:15.708713 kubelet[2591]: I0909 23:19:15.708445 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.708428192 podStartE2EDuration="1.708428192s" podCreationTimestamp="2025-09-09 23:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:15.699842044 +0000 UTC m=+1.139978884" watchObservedRunningTime="2025-09-09 23:19:15.708428192 +0000 UTC m=+1.148564912" Sep 9 23:19:16.663157 kubelet[2591]: E0909 23:19:16.663128 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 
23:19:16.663517 kubelet[2591]: E0909 23:19:16.663210 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:17.665941 kubelet[2591]: E0909 23:19:17.665853 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:18.131986 sudo[1662]: pam_unix(sudo:session): session closed for user root Sep 9 23:19:18.134535 sshd[1661]: Connection closed by 10.0.0.1 port 45342 Sep 9 23:19:18.134917 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 9 23:19:18.138173 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:45342.service: Deactivated successfully. Sep 9 23:19:18.140347 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:19:18.141648 systemd[1]: session-7.scope: Consumed 7.206s CPU time, 259.4M memory peak. Sep 9 23:19:18.143268 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:19:18.144289 systemd-logind[1463]: Removed session 7. Sep 9 23:19:19.305687 kubelet[2591]: I0909 23:19:19.305617 2591 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:19:19.313001 containerd[1476]: time="2025-09-09T23:19:19.312943585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 23:19:19.313347 kubelet[2591]: I0909 23:19:19.313219 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:19:19.605383 kubelet[2591]: E0909 23:19:19.605203 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:19.623015 kubelet[2591]: E0909 23:19:19.622959 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674411 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68508392-45a8-47e2-b248-3b6fb821315f-kube-proxy\") pod \"kube-proxy-tf9qn\" (UID: \"68508392-45a8-47e2-b248-3b6fb821315f\") " pod="kube-system/kube-proxy-tf9qn" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674442 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68508392-45a8-47e2-b248-3b6fb821315f-xtables-lock\") pod \"kube-proxy-tf9qn\" (UID: \"68508392-45a8-47e2-b248-3b6fb821315f\") " pod="kube-system/kube-proxy-tf9qn" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674471 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-etc-cni-netd\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674489 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-net\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674525 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68508392-45a8-47e2-b248-3b6fb821315f-lib-modules\") pod \"kube-proxy-tf9qn\" (UID: \"68508392-45a8-47e2-b248-3b6fb821315f\") " pod="kube-system/kube-proxy-tf9qn" Sep 9 23:19:19.675593 kubelet[2591]: I0909 23:19:19.674541 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-bpf-maps\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674555 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-hostproc\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674572 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86e48b1c-cee2-406a-b36d-625a368f74e4-clustermesh-secrets\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674597 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-config-path\") pod \"cilium-wgcnl\" (UID: 
\"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674611 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-kernel\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674624 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-run\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.675893 kubelet[2591]: I0909 23:19:19.674648 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cni-path\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.676015 kubelet[2591]: I0909 23:19:19.674673 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-xtables-lock\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.676015 kubelet[2591]: I0909 23:19:19.674688 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncqb\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.676015 kubelet[2591]: I0909 
23:19:19.674736 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx7t\" (UniqueName: \"kubernetes.io/projected/68508392-45a8-47e2-b248-3b6fb821315f-kube-api-access-7mx7t\") pod \"kube-proxy-tf9qn\" (UID: \"68508392-45a8-47e2-b248-3b6fb821315f\") " pod="kube-system/kube-proxy-tf9qn" Sep 9 23:19:19.676015 kubelet[2591]: I0909 23:19:19.674765 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-cgroup\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.676015 kubelet[2591]: I0909 23:19:19.674780 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-lib-modules\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.676183 kubelet[2591]: I0909 23:19:19.674794 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-hubble-tls\") pod \"cilium-wgcnl\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " pod="kube-system/cilium-wgcnl" Sep 9 23:19:19.683678 systemd[1]: Created slice kubepods-besteffort-pod68508392_45a8_47e2_b248_3b6fb821315f.slice - libcontainer container kubepods-besteffort-pod68508392_45a8_47e2_b248_3b6fb821315f.slice. Sep 9 23:19:19.699303 systemd[1]: Created slice kubepods-burstable-pod86e48b1c_cee2_406a_b36d_625a368f74e4.slice - libcontainer container kubepods-burstable-pod86e48b1c_cee2_406a_b36d_625a368f74e4.slice. 
Sep 9 23:19:19.786303 kubelet[2591]: E0909 23:19:19.785994 2591 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 23:19:19.786303 kubelet[2591]: E0909 23:19:19.786035 2591 projected.go:194] Error preparing data for projected volume kube-api-access-hncqb for pod kube-system/cilium-wgcnl: configmap "kube-root-ca.crt" not found Sep 9 23:19:19.786303 kubelet[2591]: E0909 23:19:19.786110 2591 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb podName:86e48b1c-cee2-406a-b36d-625a368f74e4 nodeName:}" failed. No retries permitted until 2025-09-09 23:19:20.286076751 +0000 UTC m=+5.726213471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hncqb" (UniqueName: "kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb") pod "cilium-wgcnl" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4") : configmap "kube-root-ca.crt" not found Sep 9 23:19:19.787387 kubelet[2591]: E0909 23:19:19.787361 2591 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 23:19:19.787387 kubelet[2591]: E0909 23:19:19.787385 2591 projected.go:194] Error preparing data for projected volume kube-api-access-7mx7t for pod kube-system/kube-proxy-tf9qn: configmap "kube-root-ca.crt" not found Sep 9 23:19:19.788011 kubelet[2591]: E0909 23:19:19.787448 2591 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68508392-45a8-47e2-b248-3b6fb821315f-kube-api-access-7mx7t podName:68508392-45a8-47e2-b248-3b6fb821315f nodeName:}" failed. No retries permitted until 2025-09-09 23:19:20.28743621 +0000 UTC m=+5.727572970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7mx7t" (UniqueName: "kubernetes.io/projected/68508392-45a8-47e2-b248-3b6fb821315f-kube-api-access-7mx7t") pod "kube-proxy-tf9qn" (UID: "68508392-45a8-47e2-b248-3b6fb821315f") : configmap "kube-root-ca.crt" not found Sep 9 23:19:20.485021 systemd[1]: Created slice kubepods-besteffort-pod9c673b83_9dc6_47f7_8e91_7954c89e04c6.slice - libcontainer container kubepods-besteffort-pod9c673b83_9dc6_47f7_8e91_7954c89e04c6.slice. Sep 9 23:19:20.580247 kubelet[2591]: I0909 23:19:20.580161 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnhz\" (UniqueName: \"kubernetes.io/projected/9c673b83-9dc6-47f7-8e91-7954c89e04c6-kube-api-access-prnhz\") pod \"cilium-operator-6c4d7847fc-xrx8s\" (UID: \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\") " pod="kube-system/cilium-operator-6c4d7847fc-xrx8s" Sep 9 23:19:20.580247 kubelet[2591]: I0909 23:19:20.580201 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c673b83-9dc6-47f7-8e91-7954c89e04c6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xrx8s\" (UID: \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\") " pod="kube-system/cilium-operator-6c4d7847fc-xrx8s" Sep 9 23:19:20.596357 kubelet[2591]: E0909 23:19:20.596302 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:20.596985 containerd[1476]: time="2025-09-09T23:19:20.596948699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tf9qn,Uid:68508392-45a8-47e2-b248-3b6fb821315f,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:20.603269 kubelet[2591]: E0909 23:19:20.603245 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:20.603688 containerd[1476]: time="2025-09-09T23:19:20.603653638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgcnl,Uid:86e48b1c-cee2-406a-b36d-625a368f74e4,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:20.625705 containerd[1476]: time="2025-09-09T23:19:20.625619652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:20.625705 containerd[1476]: time="2025-09-09T23:19:20.625682546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:20.625705 containerd[1476]: time="2025-09-09T23:19:20.625697270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.626421 containerd[1476]: time="2025-09-09T23:19:20.626386232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.635248 containerd[1476]: time="2025-09-09T23:19:20.635141454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:20.635248 containerd[1476]: time="2025-09-09T23:19:20.635218432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:20.635374 containerd[1476]: time="2025-09-09T23:19:20.635251560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.635445 containerd[1476]: time="2025-09-09T23:19:20.635406237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.648651 systemd[1]: Started cri-containerd-25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963.scope - libcontainer container 25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963. Sep 9 23:19:20.651517 systemd[1]: Started cri-containerd-dc3dbf734c91852a8eefa903ad2ce8be1d34d86025a3e73636482425b2f6480a.scope - libcontainer container dc3dbf734c91852a8eefa903ad2ce8be1d34d86025a3e73636482425b2f6480a. Sep 9 23:19:20.674047 containerd[1476]: time="2025-09-09T23:19:20.674005128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgcnl,Uid:86e48b1c-cee2-406a-b36d-625a368f74e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\"" Sep 9 23:19:20.675171 kubelet[2591]: E0909 23:19:20.675145 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:20.678165 containerd[1476]: time="2025-09-09T23:19:20.677914289Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 23:19:20.678629 containerd[1476]: time="2025-09-09T23:19:20.678602931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tf9qn,Uid:68508392-45a8-47e2-b248-3b6fb821315f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc3dbf734c91852a8eefa903ad2ce8be1d34d86025a3e73636482425b2f6480a\"" Sep 9 23:19:20.679333 kubelet[2591]: E0909 23:19:20.679305 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:20.692840 containerd[1476]: time="2025-09-09T23:19:20.692802795Z" level=info msg="CreateContainer within sandbox 
\"dc3dbf734c91852a8eefa903ad2ce8be1d34d86025a3e73636482425b2f6480a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:19:20.724873 containerd[1476]: time="2025-09-09T23:19:20.724828218Z" level=info msg="CreateContainer within sandbox \"dc3dbf734c91852a8eefa903ad2ce8be1d34d86025a3e73636482425b2f6480a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75f9d5f395e62ba374d06ef1c4ccb68e03d25ebd9ff171a4c74e280a42abc1f1\"" Sep 9 23:19:20.725536 containerd[1476]: time="2025-09-09T23:19:20.725476251Z" level=info msg="StartContainer for \"75f9d5f395e62ba374d06ef1c4ccb68e03d25ebd9ff171a4c74e280a42abc1f1\"" Sep 9 23:19:20.752686 systemd[1]: Started cri-containerd-75f9d5f395e62ba374d06ef1c4ccb68e03d25ebd9ff171a4c74e280a42abc1f1.scope - libcontainer container 75f9d5f395e62ba374d06ef1c4ccb68e03d25ebd9ff171a4c74e280a42abc1f1. Sep 9 23:19:20.779789 containerd[1476]: time="2025-09-09T23:19:20.779734710Z" level=info msg="StartContainer for \"75f9d5f395e62ba374d06ef1c4ccb68e03d25ebd9ff171a4c74e280a42abc1f1\" returns successfully" Sep 9 23:19:20.790973 kubelet[2591]: E0909 23:19:20.790939 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:20.792674 containerd[1476]: time="2025-09-09T23:19:20.792638230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrx8s,Uid:9c673b83-9dc6-47f7-8e91-7954c89e04c6,Namespace:kube-system,Attempt:0,}" Sep 9 23:19:20.815913 containerd[1476]: time="2025-09-09T23:19:20.815408433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:19:20.815913 containerd[1476]: time="2025-09-09T23:19:20.815466086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:19:20.815913 containerd[1476]: time="2025-09-09T23:19:20.815480770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.815913 containerd[1476]: time="2025-09-09T23:19:20.815571311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:19:20.838732 systemd[1]: Started cri-containerd-e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f.scope - libcontainer container e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f. Sep 9 23:19:20.869376 containerd[1476]: time="2025-09-09T23:19:20.869337775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrx8s,Uid:9c673b83-9dc6-47f7-8e91-7954c89e04c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\"" Sep 9 23:19:20.870240 kubelet[2591]: E0909 23:19:20.870220 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:21.675165 kubelet[2591]: E0909 23:19:21.675136 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:21.744390 kubelet[2591]: I0909 23:19:21.744289 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tf9qn" podStartSLOduration=2.744022769 podStartE2EDuration="2.744022769s" podCreationTimestamp="2025-09-09 23:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:21.743615639 +0000 UTC m=+7.183752399" watchObservedRunningTime="2025-09-09 
23:19:21.744022769 +0000 UTC m=+7.184159529" Sep 9 23:19:26.188307 kubelet[2591]: E0909 23:19:26.188271 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:26.672027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44322423.mount: Deactivated successfully. Sep 9 23:19:26.687687 kubelet[2591]: E0909 23:19:26.687657 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:27.973483 containerd[1476]: time="2025-09-09T23:19:27.973423992Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:27.974520 containerd[1476]: time="2025-09-09T23:19:27.974309854Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 23:19:27.976573 containerd[1476]: time="2025-09-09T23:19:27.976541494Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:27.978232 containerd[1476]: time="2025-09-09T23:19:27.978191640Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.299998686s" Sep 9 23:19:27.978275 containerd[1476]: time="2025-09-09T23:19:27.978242408Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 23:19:27.979846 containerd[1476]: time="2025-09-09T23:19:27.979810741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 23:19:27.983860 containerd[1476]: time="2025-09-09T23:19:27.983813587Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:19:28.009298 containerd[1476]: time="2025-09-09T23:19:28.009248824Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\"" Sep 9 23:19:28.010026 containerd[1476]: time="2025-09-09T23:19:28.009795467Z" level=info msg="StartContainer for \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\"" Sep 9 23:19:28.040718 systemd[1]: Started cri-containerd-5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd.scope - libcontainer container 5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd. Sep 9 23:19:28.060603 containerd[1476]: time="2025-09-09T23:19:28.060557317Z" level=info msg="StartContainer for \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\" returns successfully" Sep 9 23:19:28.072593 systemd[1]: cri-containerd-5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd.scope: Deactivated successfully. 
Sep 9 23:19:28.213420 containerd[1476]: time="2025-09-09T23:19:28.207331704Z" level=info msg="shim disconnected" id=5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd namespace=k8s.io Sep 9 23:19:28.213420 containerd[1476]: time="2025-09-09T23:19:28.213264212Z" level=warning msg="cleaning up after shim disconnected" id=5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd namespace=k8s.io Sep 9 23:19:28.213420 containerd[1476]: time="2025-09-09T23:19:28.213282975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:19:28.694687 kubelet[2591]: E0909 23:19:28.694652 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:28.701366 containerd[1476]: time="2025-09-09T23:19:28.700856048Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:19:28.713246 containerd[1476]: time="2025-09-09T23:19:28.713203138Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\"" Sep 9 23:19:28.714322 containerd[1476]: time="2025-09-09T23:19:28.713963974Z" level=info msg="StartContainer for \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\"" Sep 9 23:19:28.753662 systemd[1]: Started cri-containerd-32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2.scope - libcontainer container 32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2. 
Sep 9 23:19:28.774996 containerd[1476]: time="2025-09-09T23:19:28.774881619Z" level=info msg="StartContainer for \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\" returns successfully" Sep 9 23:19:28.786354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:19:28.786723 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:19:28.786993 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:19:28.792879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:19:28.793056 systemd[1]: cri-containerd-32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2.scope: Deactivated successfully. Sep 9 23:19:28.813191 containerd[1476]: time="2025-09-09T23:19:28.813138555Z" level=info msg="shim disconnected" id=32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2 namespace=k8s.io Sep 9 23:19:28.813191 containerd[1476]: time="2025-09-09T23:19:28.813192523Z" level=warning msg="cleaning up after shim disconnected" id=32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2 namespace=k8s.io Sep 9 23:19:28.813401 containerd[1476]: time="2025-09-09T23:19:28.813202125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:19:28.819973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:19:29.004018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd-rootfs.mount: Deactivated successfully. Sep 9 23:19:29.112092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976625297.mount: Deactivated successfully. 
Sep 9 23:19:29.359233 containerd[1476]: time="2025-09-09T23:19:29.359189052Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:29.360114 containerd[1476]: time="2025-09-09T23:19:29.359974526Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 23:19:29.360976 containerd[1476]: time="2025-09-09T23:19:29.360736677Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:19:29.362337 containerd[1476]: time="2025-09-09T23:19:29.362289223Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.382435315s" Sep 9 23:19:29.362337 containerd[1476]: time="2025-09-09T23:19:29.362327988Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 23:19:29.366401 containerd[1476]: time="2025-09-09T23:19:29.366368936Z" level=info msg="CreateContainer within sandbox \"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 23:19:29.386436 containerd[1476]: time="2025-09-09T23:19:29.386399930Z" level=info msg="CreateContainer within sandbox 
\"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\"" Sep 9 23:19:29.386997 containerd[1476]: time="2025-09-09T23:19:29.386880679Z" level=info msg="StartContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\"" Sep 9 23:19:29.411674 systemd[1]: Started cri-containerd-ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce.scope - libcontainer container ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce. Sep 9 23:19:29.436023 containerd[1476]: time="2025-09-09T23:19:29.435982541Z" level=info msg="StartContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" returns successfully" Sep 9 23:19:29.612401 kubelet[2591]: E0909 23:19:29.611380 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:29.632968 kubelet[2591]: E0909 23:19:29.632922 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:29.697407 kubelet[2591]: E0909 23:19:29.696946 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:29.700215 kubelet[2591]: E0909 23:19:29.699939 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:29.700215 kubelet[2591]: E0909 23:19:29.700213 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 9 23:19:29.712197 containerd[1476]: time="2025-09-09T23:19:29.712117863Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:19:29.726837 containerd[1476]: time="2025-09-09T23:19:29.726702984Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\"" Sep 9 23:19:29.727449 containerd[1476]: time="2025-09-09T23:19:29.727420969Z" level=info msg="StartContainer for \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\"" Sep 9 23:19:29.732596 kubelet[2591]: I0909 23:19:29.732429 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xrx8s" podStartSLOduration=1.240106358 podStartE2EDuration="9.732414015s" podCreationTimestamp="2025-09-09 23:19:20 +0000 UTC" firstStartedPulling="2025-09-09 23:19:20.870857053 +0000 UTC m=+6.310993813" lastFinishedPulling="2025-09-09 23:19:29.36316471 +0000 UTC m=+14.803301470" observedRunningTime="2025-09-09 23:19:29.707044805 +0000 UTC m=+15.147181565" watchObservedRunningTime="2025-09-09 23:19:29.732414015 +0000 UTC m=+15.172550775" Sep 9 23:19:29.759712 systemd[1]: Started cri-containerd-2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c.scope - libcontainer container 2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c. Sep 9 23:19:29.794726 containerd[1476]: time="2025-09-09T23:19:29.794680511Z" level=info msg="StartContainer for \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\" returns successfully" Sep 9 23:19:29.795063 systemd[1]: cri-containerd-2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c.scope: Deactivated successfully. 
Sep 9 23:19:29.883918 containerd[1476]: time="2025-09-09T23:19:29.883253354Z" level=info msg="shim disconnected" id=2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c namespace=k8s.io Sep 9 23:19:29.883918 containerd[1476]: time="2025-09-09T23:19:29.883724582Z" level=warning msg="cleaning up after shim disconnected" id=2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c namespace=k8s.io Sep 9 23:19:29.883918 containerd[1476]: time="2025-09-09T23:19:29.883742785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:19:30.130601 update_engine[1464]: I20250909 23:19:30.130525 1464 update_attempter.cc:509] Updating boot flags... Sep 9 23:19:30.169597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3250) Sep 9 23:19:30.216688 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3254) Sep 9 23:19:30.702806 kubelet[2591]: E0909 23:19:30.702760 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:30.703170 kubelet[2591]: E0909 23:19:30.702816 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:30.710265 containerd[1476]: time="2025-09-09T23:19:30.710089025Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:19:30.725376 containerd[1476]: time="2025-09-09T23:19:30.725263043Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id 
\"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\"" Sep 9 23:19:30.726001 containerd[1476]: time="2025-09-09T23:19:30.725949978Z" level=info msg="StartContainer for \"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\"" Sep 9 23:19:30.752688 systemd[1]: Started cri-containerd-9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1.scope - libcontainer container 9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1. Sep 9 23:19:30.776252 systemd[1]: cri-containerd-9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1.scope: Deactivated successfully. Sep 9 23:19:30.778079 containerd[1476]: time="2025-09-09T23:19:30.778031701Z" level=info msg="StartContainer for \"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\" returns successfully" Sep 9 23:19:30.798807 containerd[1476]: time="2025-09-09T23:19:30.798748166Z" level=info msg="shim disconnected" id=9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1 namespace=k8s.io Sep 9 23:19:30.798807 containerd[1476]: time="2025-09-09T23:19:30.798796252Z" level=warning msg="cleaning up after shim disconnected" id=9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1 namespace=k8s.io Sep 9 23:19:30.798807 containerd[1476]: time="2025-09-09T23:19:30.798805934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:19:31.006084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1-rootfs.mount: Deactivated successfully. 
Sep 9 23:19:31.706997 kubelet[2591]: E0909 23:19:31.706813 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:31.710837 containerd[1476]: time="2025-09-09T23:19:31.710647795Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:19:31.730107 containerd[1476]: time="2025-09-09T23:19:31.730061790Z" level=info msg="CreateContainer within sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\"" Sep 9 23:19:31.731333 containerd[1476]: time="2025-09-09T23:19:31.730558335Z" level=info msg="StartContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\"" Sep 9 23:19:31.758675 systemd[1]: Started cri-containerd-2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2.scope - libcontainer container 2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2. Sep 9 23:19:31.783618 containerd[1476]: time="2025-09-09T23:19:31.783488980Z" level=info msg="StartContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" returns successfully" Sep 9 23:19:31.919010 kubelet[2591]: I0909 23:19:31.918972 2591 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:19:31.971534 systemd[1]: Created slice kubepods-burstable-pod7cf6b170_ec31_47ed_895d_3b908da1e8ad.slice - libcontainer container kubepods-burstable-pod7cf6b170_ec31_47ed_895d_3b908da1e8ad.slice. Sep 9 23:19:31.981918 systemd[1]: Created slice kubepods-burstable-pod38ea71dd_6659_4715_9ef2_a091547006c0.slice - libcontainer container kubepods-burstable-pod38ea71dd_6659_4715_9ef2_a091547006c0.slice. 
Sep 9 23:19:32.067826 kubelet[2591]: I0909 23:19:32.067777 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38ea71dd-6659-4715-9ef2-a091547006c0-config-volume\") pod \"coredns-674b8bbfcf-4p9ww\" (UID: \"38ea71dd-6659-4715-9ef2-a091547006c0\") " pod="kube-system/coredns-674b8bbfcf-4p9ww" Sep 9 23:19:32.067979 kubelet[2591]: I0909 23:19:32.067850 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q7h\" (UniqueName: \"kubernetes.io/projected/38ea71dd-6659-4715-9ef2-a091547006c0-kube-api-access-t2q7h\") pod \"coredns-674b8bbfcf-4p9ww\" (UID: \"38ea71dd-6659-4715-9ef2-a091547006c0\") " pod="kube-system/coredns-674b8bbfcf-4p9ww" Sep 9 23:19:32.067979 kubelet[2591]: I0909 23:19:32.067906 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf6b170-ec31-47ed-895d-3b908da1e8ad-config-volume\") pod \"coredns-674b8bbfcf-9c49s\" (UID: \"7cf6b170-ec31-47ed-895d-3b908da1e8ad\") " pod="kube-system/coredns-674b8bbfcf-9c49s" Sep 9 23:19:32.067979 kubelet[2591]: I0909 23:19:32.067940 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnspn\" (UniqueName: \"kubernetes.io/projected/7cf6b170-ec31-47ed-895d-3b908da1e8ad-kube-api-access-bnspn\") pod \"coredns-674b8bbfcf-9c49s\" (UID: \"7cf6b170-ec31-47ed-895d-3b908da1e8ad\") " pod="kube-system/coredns-674b8bbfcf-9c49s" Sep 9 23:19:32.278164 kubelet[2591]: E0909 23:19:32.278112 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 23:19:32.279296 containerd[1476]: time="2025-09-09T23:19:32.279241584Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9c49s,Uid:7cf6b170-ec31-47ed-895d-3b908da1e8ad,Namespace:kube-system,Attempt:0,}"
Sep 9 23:19:32.285228 kubelet[2591]: E0909 23:19:32.284970 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:32.285480 containerd[1476]: time="2025-09-09T23:19:32.285447882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4p9ww,Uid:38ea71dd-6659-4715-9ef2-a091547006c0,Namespace:kube-system,Attempt:0,}"
Sep 9 23:19:32.711485 kubelet[2591]: E0909 23:19:32.711382 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:32.724959 kubelet[2591]: I0909 23:19:32.724901 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wgcnl" podStartSLOduration=6.421520982 podStartE2EDuration="13.724886346s" podCreationTimestamp="2025-09-09 23:19:19 +0000 UTC" firstStartedPulling="2025-09-09 23:19:20.675830398 +0000 UTC m=+6.115967158" lastFinishedPulling="2025-09-09 23:19:27.979195762 +0000 UTC m=+13.419332522" observedRunningTime="2025-09-09 23:19:32.723890581 +0000 UTC m=+18.164027341" watchObservedRunningTime="2025-09-09 23:19:32.724886346 +0000 UTC m=+18.165023106"
Sep 9 23:19:33.712878 kubelet[2591]: E0909 23:19:33.712841 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:33.811536 systemd-networkd[1386]: cilium_host: Link UP
Sep 9 23:19:33.811659 systemd-networkd[1386]: cilium_net: Link UP
Sep 9 23:19:33.811785 systemd-networkd[1386]: cilium_net: Gained carrier
Sep 9 23:19:33.811920 systemd-networkd[1386]: cilium_host: Gained carrier
Sep 9 23:19:33.886071 systemd-networkd[1386]: cilium_vxlan: Link UP
Sep 9 23:19:33.886077 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Sep 9 23:19:34.132556 kernel: NET: Registered PF_ALG protocol family
Sep 9 23:19:34.464761 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Sep 9 23:19:34.700304 systemd-networkd[1386]: lxc_health: Link UP
Sep 9 23:19:34.700577 systemd-networkd[1386]: lxc_health: Gained carrier
Sep 9 23:19:34.714528 kubelet[2591]: E0909 23:19:34.713998 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:34.846848 systemd-networkd[1386]: lxc56ce73e2c554: Link UP
Sep 9 23:19:34.868427 systemd-networkd[1386]: lxcb6901e2019bc: Link UP
Sep 9 23:19:34.869609 kernel: eth0: renamed from tmp63435
Sep 9 23:19:34.877556 kernel: eth0: renamed from tmp6921b
Sep 9 23:19:34.882295 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Sep 9 23:19:34.882923 systemd-networkd[1386]: lxc56ce73e2c554: Gained carrier
Sep 9 23:19:34.884361 systemd-networkd[1386]: lxcb6901e2019bc: Gained carrier
Sep 9 23:19:35.105603 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Sep 9 23:19:36.065664 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Sep 9 23:19:36.576676 systemd-networkd[1386]: lxc56ce73e2c554: Gained IPv6LL
Sep 9 23:19:36.610926 kubelet[2591]: E0909 23:19:36.610891 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:36.704719 systemd-networkd[1386]: lxcb6901e2019bc: Gained IPv6LL
Sep 9 23:19:36.720394 kubelet[2591]: I0909 23:19:36.720334 2591 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:19:36.721449 kubelet[2591]: E0909 23:19:36.721147 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:37.720782 kubelet[2591]: E0909 23:19:37.720737 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:38.381560 containerd[1476]: time="2025-09-09T23:19:38.381436685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 23:19:38.382326 containerd[1476]: time="2025-09-09T23:19:38.381974056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 23:19:38.382326 containerd[1476]: time="2025-09-09T23:19:38.382001739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:19:38.382326 containerd[1476]: time="2025-09-09T23:19:38.382090948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:19:38.397921 containerd[1476]: time="2025-09-09T23:19:38.397339277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 23:19:38.397921 containerd[1476]: time="2025-09-09T23:19:38.397767717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 23:19:38.397921 containerd[1476]: time="2025-09-09T23:19:38.397778478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:19:38.397921 containerd[1476]: time="2025-09-09T23:19:38.397858966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:19:38.400666 systemd[1]: Started cri-containerd-6921b9d1b3b9c9c48e4a6e6389a6a8b28dc4f230fe8cd5e07854454e2428e2ad.scope - libcontainer container 6921b9d1b3b9c9c48e4a6e6389a6a8b28dc4f230fe8cd5e07854454e2428e2ad.
Sep 9 23:19:38.413561 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 23:19:38.422675 systemd[1]: Started cri-containerd-634357c17cc0cd788728580a73a86fc51281ee31f6d7971e63c5a486fbb42d5e.scope - libcontainer container 634357c17cc0cd788728580a73a86fc51281ee31f6d7971e63c5a486fbb42d5e.
Sep 9 23:19:38.433834 containerd[1476]: time="2025-09-09T23:19:38.433790941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9c49s,Uid:7cf6b170-ec31-47ed-895d-3b908da1e8ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"6921b9d1b3b9c9c48e4a6e6389a6a8b28dc4f230fe8cd5e07854454e2428e2ad\""
Sep 9 23:19:38.434674 kubelet[2591]: E0909 23:19:38.434646 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:38.436717 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 23:19:38.439706 containerd[1476]: time="2025-09-09T23:19:38.439419356Z" level=info msg="CreateContainer within sandbox \"6921b9d1b3b9c9c48e4a6e6389a6a8b28dc4f230fe8cd5e07854454e2428e2ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:19:38.454095 containerd[1476]: time="2025-09-09T23:19:38.454024224Z" level=info msg="CreateContainer within sandbox \"6921b9d1b3b9c9c48e4a6e6389a6a8b28dc4f230fe8cd5e07854454e2428e2ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fa5ba8da8e1ebeeabaa0a7a4bb47c3d95ba9784b524dbc26526fd032ff23d29\""
Sep 9 23:19:38.455683 containerd[1476]: time="2025-09-09T23:19:38.454808659Z" level=info msg="StartContainer for \"1fa5ba8da8e1ebeeabaa0a7a4bb47c3d95ba9784b524dbc26526fd032ff23d29\""
Sep 9 23:19:38.461800 containerd[1476]: time="2025-09-09T23:19:38.461754759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4p9ww,Uid:38ea71dd-6659-4715-9ef2-a091547006c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"634357c17cc0cd788728580a73a86fc51281ee31f6d7971e63c5a486fbb42d5e\""
Sep 9 23:19:38.462932 kubelet[2591]: E0909 23:19:38.462903 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:38.471944 containerd[1476]: time="2025-09-09T23:19:38.471901123Z" level=info msg="CreateContainer within sandbox \"634357c17cc0cd788728580a73a86fc51281ee31f6d7971e63c5a486fbb42d5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:19:38.487526 containerd[1476]: time="2025-09-09T23:19:38.487468403Z" level=info msg="CreateContainer within sandbox \"634357c17cc0cd788728580a73a86fc51281ee31f6d7971e63c5a486fbb42d5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0780c73fdc9b681d0e34b3b1e478c1be0a45eb8ce6277b5936b1100486a1319d\""
Sep 9 23:19:38.488303 containerd[1476]: time="2025-09-09T23:19:38.488238276Z" level=info msg="StartContainer for \"0780c73fdc9b681d0e34b3b1e478c1be0a45eb8ce6277b5936b1100486a1319d\""
Sep 9 23:19:38.490506 systemd[1]: Started cri-containerd-1fa5ba8da8e1ebeeabaa0a7a4bb47c3d95ba9784b524dbc26526fd032ff23d29.scope - libcontainer container 1fa5ba8da8e1ebeeabaa0a7a4bb47c3d95ba9784b524dbc26526fd032ff23d29.
Sep 9 23:19:38.518377 containerd[1476]: time="2025-09-09T23:19:38.518311334Z" level=info msg="StartContainer for \"1fa5ba8da8e1ebeeabaa0a7a4bb47c3d95ba9784b524dbc26526fd032ff23d29\" returns successfully"
Sep 9 23:19:38.530860 systemd[1]: Started cri-containerd-0780c73fdc9b681d0e34b3b1e478c1be0a45eb8ce6277b5936b1100486a1319d.scope - libcontainer container 0780c73fdc9b681d0e34b3b1e478c1be0a45eb8ce6277b5936b1100486a1319d.
Sep 9 23:19:38.559753 containerd[1476]: time="2025-09-09T23:19:38.559685186Z" level=info msg="StartContainer for \"0780c73fdc9b681d0e34b3b1e478c1be0a45eb8ce6277b5936b1100486a1319d\" returns successfully"
Sep 9 23:19:38.723859 kubelet[2591]: E0909 23:19:38.723680 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:38.729202 kubelet[2591]: E0909 23:19:38.728275 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:38.739128 kubelet[2591]: I0909 23:19:38.737801 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4p9ww" podStartSLOduration=18.737785112 podStartE2EDuration="18.737785112s" podCreationTimestamp="2025-09-09 23:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:38.736716291 +0000 UTC m=+24.176853011" watchObservedRunningTime="2025-09-09 23:19:38.737785112 +0000 UTC m=+24.177921872"
Sep 9 23:19:38.748703 kubelet[2591]: I0909 23:19:38.748377 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9c49s" podStartSLOduration=18.748360677 podStartE2EDuration="18.748360677s" podCreationTimestamp="2025-09-09 23:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:19:38.748266148 +0000 UTC m=+24.188402948" watchObservedRunningTime="2025-09-09 23:19:38.748360677 +0000 UTC m=+24.188497477"
Sep 9 23:19:39.386940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820833916.mount: Deactivated successfully.
Sep 9 23:19:39.729728 kubelet[2591]: E0909 23:19:39.729620 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:39.730130 kubelet[2591]: E0909 23:19:39.729833 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:40.731853 kubelet[2591]: E0909 23:19:40.731702 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:40.731853 kubelet[2591]: E0909 23:19:40.731783 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:19:42.038219 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:38756.service - OpenSSH per-connection server daemon (10.0.0.1:38756).
Sep 9 23:19:42.086793 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 38756 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:42.088154 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:42.092390 systemd-logind[1463]: New session 8 of user core.
Sep 9 23:19:42.098673 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 23:19:42.230257 sshd[4005]: Connection closed by 10.0.0.1 port 38756
Sep 9 23:19:42.231591 sshd-session[4003]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:42.235559 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:38756.service: Deactivated successfully.
Sep 9 23:19:42.237258 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 23:19:42.237921 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Sep 9 23:19:42.238846 systemd-logind[1463]: Removed session 8.
Sep 9 23:19:47.244033 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:38760.service - OpenSSH per-connection server daemon (10.0.0.1:38760).
Sep 9 23:19:47.284233 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 38760 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:47.285676 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:47.290050 systemd-logind[1463]: New session 9 of user core.
Sep 9 23:19:47.299670 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 23:19:47.418150 sshd[4031]: Connection closed by 10.0.0.1 port 38760
Sep 9 23:19:47.418523 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:47.421079 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:38760.service: Deactivated successfully.
Sep 9 23:19:47.423117 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 23:19:47.424828 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Sep 9 23:19:47.426187 systemd-logind[1463]: Removed session 9.
Sep 9 23:19:52.431641 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:59290.service - OpenSSH per-connection server daemon (10.0.0.1:59290).
Sep 9 23:19:52.475396 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 59290 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:52.476627 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:52.480573 systemd-logind[1463]: New session 10 of user core.
Sep 9 23:19:52.488719 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 23:19:52.639410 sshd[4053]: Connection closed by 10.0.0.1 port 59290
Sep 9 23:19:52.639783 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:52.642910 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:59290.service: Deactivated successfully.
Sep 9 23:19:52.646149 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 23:19:52.647752 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Sep 9 23:19:52.648672 systemd-logind[1463]: Removed session 10.
Sep 9 23:19:57.660092 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:59304.service - OpenSSH per-connection server daemon (10.0.0.1:59304).
Sep 9 23:19:57.702962 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 59304 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:57.704454 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:57.711226 systemd-logind[1463]: New session 11 of user core.
Sep 9 23:19:57.720081 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 23:19:57.836552 sshd[4069]: Connection closed by 10.0.0.1 port 59304
Sep 9 23:19:57.836742 sshd-session[4067]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:57.850789 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:59304.service: Deactivated successfully.
Sep 9 23:19:57.852404 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 23:19:57.853706 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Sep 9 23:19:57.862906 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:59316.service - OpenSSH per-connection server daemon (10.0.0.1:59316).
Sep 9 23:19:57.864219 systemd-logind[1463]: Removed session 11.
Sep 9 23:19:57.907559 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 59316 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:57.909233 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:57.913556 systemd-logind[1463]: New session 12 of user core.
Sep 9 23:19:57.924669 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 23:19:58.074040 sshd[4085]: Connection closed by 10.0.0.1 port 59316
Sep 9 23:19:58.074579 sshd-session[4082]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:58.085303 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:59316.service: Deactivated successfully.
Sep 9 23:19:58.086730 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 23:19:58.093926 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Sep 9 23:19:58.109841 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:59318.service - OpenSSH per-connection server daemon (10.0.0.1:59318).
Sep 9 23:19:58.110688 systemd-logind[1463]: Removed session 12.
Sep 9 23:19:58.147681 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 59318 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:19:58.149072 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:19:58.153553 systemd-logind[1463]: New session 13 of user core.
Sep 9 23:19:58.173752 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 23:19:58.290583 sshd[4099]: Connection closed by 10.0.0.1 port 59318
Sep 9 23:19:58.290944 sshd-session[4096]: pam_unix(sshd:session): session closed for user core
Sep 9 23:19:58.294176 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:59318.service: Deactivated successfully.
Sep 9 23:19:58.296023 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 23:19:58.299020 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Sep 9 23:19:58.304089 systemd-logind[1463]: Removed session 13.
Sep 9 23:20:03.303125 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:34372.service - OpenSSH per-connection server daemon (10.0.0.1:34372).
Sep 9 23:20:03.345454 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 34372 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:03.347103 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:03.358501 systemd-logind[1463]: New session 14 of user core.
Sep 9 23:20:03.371707 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:20:03.485430 sshd[4114]: Connection closed by 10.0.0.1 port 34372
Sep 9 23:20:03.485984 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:03.490040 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:20:03.490346 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:34372.service: Deactivated successfully.
Sep 9 23:20:03.491949 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:20:03.494252 systemd-logind[1463]: Removed session 14.
Sep 9 23:20:08.498224 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376).
Sep 9 23:20:08.537161 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:08.538357 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:08.541993 systemd-logind[1463]: New session 15 of user core.
Sep 9 23:20:08.549646 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:20:08.656280 sshd[4130]: Connection closed by 10.0.0.1 port 34376
Sep 9 23:20:08.656639 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:08.671081 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:34376.service: Deactivated successfully.
Sep 9 23:20:08.672565 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:20:08.673152 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:20:08.686767 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:34390.service - OpenSSH per-connection server daemon (10.0.0.1:34390).
Sep 9 23:20:08.687807 systemd-logind[1463]: Removed session 15.
Sep 9 23:20:08.722944 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 34390 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:08.724761 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:08.728797 systemd-logind[1463]: New session 16 of user core.
Sep 9 23:20:08.737672 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:20:08.911987 sshd[4145]: Connection closed by 10.0.0.1 port 34390
Sep 9 23:20:08.912577 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:08.922802 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:34390.service: Deactivated successfully.
Sep 9 23:20:08.924332 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 23:20:08.925704 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:20:08.927192 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402).
Sep 9 23:20:08.928041 systemd-logind[1463]: Removed session 16.
Sep 9 23:20:08.972760 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:08.974113 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:08.977917 systemd-logind[1463]: New session 17 of user core.
Sep 9 23:20:08.985650 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 23:20:09.609430 sshd[4158]: Connection closed by 10.0.0.1 port 34402
Sep 9 23:20:09.609767 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:09.622328 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:34402.service: Deactivated successfully.
Sep 9 23:20:09.626205 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 23:20:09.629649 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Sep 9 23:20:09.638842 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:34408.service - OpenSSH per-connection server daemon (10.0.0.1:34408).
Sep 9 23:20:09.641413 systemd-logind[1463]: Removed session 17.
Sep 9 23:20:09.676454 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 34408 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:09.677831 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:09.681906 systemd-logind[1463]: New session 18 of user core.
Sep 9 23:20:09.691667 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:20:09.905790 sshd[4179]: Connection closed by 10.0.0.1 port 34408
Sep 9 23:20:09.907185 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:09.915323 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:57120.service - OpenSSH per-connection server daemon (10.0.0.1:57120).
Sep 9 23:20:09.915760 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:34408.service: Deactivated successfully.
Sep 9 23:20:09.917243 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:20:09.919892 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:20:09.930483 systemd-logind[1463]: Removed session 18.
Sep 9 23:20:09.965774 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 57120 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:09.967010 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:09.971650 systemd-logind[1463]: New session 19 of user core.
Sep 9 23:20:09.975644 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:20:10.081220 sshd[4193]: Connection closed by 10.0.0.1 port 57120
Sep 9 23:20:10.081593 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:10.084811 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:57120.service: Deactivated successfully.
Sep 9 23:20:10.086575 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:20:10.087930 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:20:10.088890 systemd-logind[1463]: Removed session 19.
Sep 9 23:20:15.097099 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:57124.service - OpenSSH per-connection server daemon (10.0.0.1:57124).
Sep 9 23:20:15.145996 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 57124 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:15.152256 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:15.157366 systemd-logind[1463]: New session 20 of user core.
Sep 9 23:20:15.161645 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:20:15.281740 sshd[4214]: Connection closed by 10.0.0.1 port 57124
Sep 9 23:20:15.283093 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:15.286283 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:57124.service: Deactivated successfully.
Sep 9 23:20:15.289453 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:20:15.290322 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:20:15.291199 systemd-logind[1463]: Removed session 20.
Sep 9 23:20:20.298639 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730).
Sep 9 23:20:20.339082 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:20.340405 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:20.345017 systemd-logind[1463]: New session 21 of user core.
Sep 9 23:20:20.354687 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:20:20.469546 sshd[4229]: Connection closed by 10.0.0.1 port 54730
Sep 9 23:20:20.469402 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:20.473812 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:54730.service: Deactivated successfully.
Sep 9 23:20:20.475459 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:20:20.476105 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:20:20.476970 systemd-logind[1463]: Removed session 21.
Sep 9 23:20:25.488186 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:54746.service - OpenSSH per-connection server daemon (10.0.0.1:54746).
Sep 9 23:20:25.529256 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:25.530595 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:25.534961 systemd-logind[1463]: New session 22 of user core.
Sep 9 23:20:25.542704 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:20:25.672760 sshd[4246]: Connection closed by 10.0.0.1 port 54746
Sep 9 23:20:25.673221 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:25.685999 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:54746.service: Deactivated successfully.
Sep 9 23:20:25.688998 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:20:25.690338 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:20:25.698994 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:54762.service - OpenSSH per-connection server daemon (10.0.0.1:54762).
Sep 9 23:20:25.700468 systemd-logind[1463]: Removed session 22.
Sep 9 23:20:25.741179 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 54762 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk
Sep 9 23:20:25.742979 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:20:25.753049 systemd-logind[1463]: New session 23 of user core.
Sep 9 23:20:25.762675 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:20:27.635608 containerd[1476]: time="2025-09-09T23:20:27.635569612Z" level=info msg="StopContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" with timeout 30 (s)"
Sep 9 23:20:27.636303 containerd[1476]: time="2025-09-09T23:20:27.636101128Z" level=info msg="Stop container \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" with signal terminated"
Sep 9 23:20:27.645586 systemd[1]: cri-containerd-ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce.scope: Deactivated successfully.
Sep 9 23:20:27.676857 containerd[1476]: time="2025-09-09T23:20:27.676818607Z" level=info msg="StopContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" with timeout 2 (s)"
Sep 9 23:20:27.677284 containerd[1476]: time="2025-09-09T23:20:27.677257924Z" level=info msg="Stop container \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" with signal terminated"
Sep 9 23:20:27.678915 containerd[1476]: time="2025-09-09T23:20:27.678875791Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:20:27.680868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce-rootfs.mount: Deactivated successfully.
Sep 9 23:20:27.685542 systemd-networkd[1386]: lxc_health: Link DOWN
Sep 9 23:20:27.685558 systemd-networkd[1386]: lxc_health: Lost carrier
Sep 9 23:20:27.690460 containerd[1476]: time="2025-09-09T23:20:27.690314901Z" level=info msg="shim disconnected" id=ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce namespace=k8s.io
Sep 9 23:20:27.690460 containerd[1476]: time="2025-09-09T23:20:27.690424980Z" level=warning msg="cleaning up after shim disconnected" id=ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce namespace=k8s.io
Sep 9 23:20:27.690460 containerd[1476]: time="2025-09-09T23:20:27.690435460Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:27.702630 systemd[1]: cri-containerd-2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2.scope: Deactivated successfully.
Sep 9 23:20:27.703949 systemd[1]: cri-containerd-2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2.scope: Consumed 6.174s CPU time, 124.4M memory peak, 144K read from disk, 12.9M written to disk.
Sep 9 23:20:27.721850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2-rootfs.mount: Deactivated successfully.
Sep 9 23:20:27.743040 containerd[1476]: time="2025-09-09T23:20:27.742966846Z" level=info msg="shim disconnected" id=2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2 namespace=k8s.io
Sep 9 23:20:27.743040 containerd[1476]: time="2025-09-09T23:20:27.743032165Z" level=warning msg="cleaning up after shim disconnected" id=2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2 namespace=k8s.io
Sep 9 23:20:27.743040 containerd[1476]: time="2025-09-09T23:20:27.743041725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:27.757406 containerd[1476]: time="2025-09-09T23:20:27.757359092Z" level=info msg="StopContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" returns successfully"
Sep 9 23:20:27.757406 containerd[1476]: time="2025-09-09T23:20:27.757401612Z" level=info msg="StopContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" returns successfully"
Sep 9 23:20:27.758140 containerd[1476]: time="2025-09-09T23:20:27.758113726Z" level=info msg="StopPodSandbox for \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\""
Sep 9 23:20:27.758180 containerd[1476]: time="2025-09-09T23:20:27.758153726Z" level=info msg="Container to stop \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.758180 containerd[1476]: time="2025-09-09T23:20:27.758165446Z" level=info msg="Container to stop \"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.758180 containerd[1476]: time="2025-09-09T23:20:27.758173686Z" level=info msg="Container to stop \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.758254 containerd[1476]: time="2025-09-09T23:20:27.758182926Z" level=info msg="Container to stop \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.758254 containerd[1476]: time="2025-09-09T23:20:27.758191005Z" level=info msg="Container to stop \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.758494 containerd[1476]: time="2025-09-09T23:20:27.758465643Z" level=info msg="StopPodSandbox for \"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\""
Sep 9 23:20:27.758531 containerd[1476]: time="2025-09-09T23:20:27.758513483Z" level=info msg="Container to stop \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:20:27.760123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f-shm.mount: Deactivated successfully.
Sep 9 23:20:27.760237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963-shm.mount: Deactivated successfully.
Sep 9 23:20:27.764869 systemd[1]: cri-containerd-25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963.scope: Deactivated successfully.
Sep 9 23:20:27.770765 systemd[1]: cri-containerd-e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f.scope: Deactivated successfully.
Sep 9 23:20:27.799756 containerd[1476]: time="2025-09-09T23:20:27.799681118Z" level=info msg="shim disconnected" id=25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963 namespace=k8s.io
Sep 9 23:20:27.800609 containerd[1476]: time="2025-09-09T23:20:27.800422313Z" level=warning msg="cleaning up after shim disconnected" id=25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963 namespace=k8s.io
Sep 9 23:20:27.800609 containerd[1476]: time="2025-09-09T23:20:27.800446752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:27.808675 containerd[1476]: time="2025-09-09T23:20:27.808607448Z" level=info msg="shim disconnected" id=e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f namespace=k8s.io
Sep 9 23:20:27.808675 containerd[1476]: time="2025-09-09T23:20:27.808666848Z" level=warning msg="cleaning up after shim disconnected" id=e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f namespace=k8s.io
Sep 9 23:20:27.808675 containerd[1476]: time="2025-09-09T23:20:27.808678887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:27.848598 containerd[1476]: time="2025-09-09T23:20:27.848556373Z" level=info msg="TearDown network for sandbox \"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\" successfully"
Sep 9 23:20:27.848776 containerd[1476]: time="2025-09-09T23:20:27.848759612Z" level=info msg="StopPodSandbox for \"e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f\" returns successfully"
Sep 9 23:20:27.853350 containerd[1476]: time="2025-09-09T23:20:27.853315416Z" level=info msg="TearDown network for sandbox \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" successfully"
Sep 9 23:20:27.853575 containerd[1476]: time="2025-09-09T23:20:27.853558254Z" level=info msg="StopPodSandbox for \"25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963\" returns successfully"
Sep 9 23:20:27.909594 kubelet[2591]: I0909 23:20:27.909453 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-hubble-tls\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.909594 kubelet[2591]: I0909 23:20:27.909523 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-kernel\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.909594 kubelet[2591]: I0909 23:20:27.909565 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-run\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.910394 kubelet[2591]: I0909 23:20:27.910369 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-bpf-maps\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.910437 kubelet[2591]: I0909 23:20:27.910411 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-lib-modules\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.910437 kubelet[2591]: I0909 23:20:27.910428 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-cgroup\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.910509 kubelet[2591]: I0909 23:20:27.910446 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hncqb\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.910509 kubelet[2591]: I0909 23:20:27.910461 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cni-path\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.912516 kubelet[2591]: I0909 23:20:27.910488 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prnhz\" (UniqueName: \"kubernetes.io/projected/9c673b83-9dc6-47f7-8e91-7954c89e04c6-kube-api-access-prnhz\") pod \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\" (UID: \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\") "
Sep 9 23:20:27.912516 kubelet[2591]: I0909 23:20:27.912437 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-net\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.912516 kubelet[2591]: I0909 23:20:27.912459 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86e48b1c-cee2-406a-b36d-625a368f74e4-clustermesh-secrets\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") "
Sep 9 23:20:27.912516 kubelet[2591]: I0909 23:20:27.912475 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\"
(UniqueName: \"kubernetes.io/configmap/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-config-path\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " Sep 9 23:20:27.912516 kubelet[2591]: I0909 23:20:27.912498 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c673b83-9dc6-47f7-8e91-7954c89e04c6-cilium-config-path\") pod \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\" (UID: \"9c673b83-9dc6-47f7-8e91-7954c89e04c6\") " Sep 9 23:20:27.912694 kubelet[2591]: I0909 23:20:27.912539 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-hostproc\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " Sep 9 23:20:27.912694 kubelet[2591]: I0909 23:20:27.912564 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-etc-cni-netd\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " Sep 9 23:20:27.912694 kubelet[2591]: I0909 23:20:27.912580 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-xtables-lock\") pod \"86e48b1c-cee2-406a-b36d-625a368f74e4\" (UID: \"86e48b1c-cee2-406a-b36d-625a368f74e4\") " Sep 9 23:20:27.912694 kubelet[2591]: I0909 23:20:27.911772 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912694 kubelet[2591]: I0909 23:20:27.911795 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912801 kubelet[2591]: I0909 23:20:27.911810 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912801 kubelet[2591]: I0909 23:20:27.911822 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912801 kubelet[2591]: I0909 23:20:27.912376 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912801 kubelet[2591]: I0909 23:20:27.912396 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cni-path" (OuterVolumeSpecName: "cni-path") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912801 kubelet[2591]: I0909 23:20:27.912630 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.912907 kubelet[2591]: I0909 23:20:27.912693 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.913925 kubelet[2591]: I0909 23:20:27.913889 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-hostproc" (OuterVolumeSpecName: "hostproc") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.915038 kubelet[2591]: I0909 23:20:27.915008 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c673b83-9dc6-47f7-8e91-7954c89e04c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c673b83-9dc6-47f7-8e91-7954c89e04c6" (UID: "9c673b83-9dc6-47f7-8e91-7954c89e04c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:20:27.915082 kubelet[2591]: I0909 23:20:27.915069 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:20:27.916722 kubelet[2591]: I0909 23:20:27.916688 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:20:27.916945 kubelet[2591]: I0909 23:20:27.916920 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb" (OuterVolumeSpecName: "kube-api-access-hncqb") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "kube-api-access-hncqb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:20:27.917029 kubelet[2591]: I0909 23:20:27.916951 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c673b83-9dc6-47f7-8e91-7954c89e04c6-kube-api-access-prnhz" (OuterVolumeSpecName: "kube-api-access-prnhz") pod "9c673b83-9dc6-47f7-8e91-7954c89e04c6" (UID: "9c673b83-9dc6-47f7-8e91-7954c89e04c6"). InnerVolumeSpecName "kube-api-access-prnhz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:20:27.917029 kubelet[2591]: I0909 23:20:27.917017 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e48b1c-cee2-406a-b36d-625a368f74e4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:20:27.917075 kubelet[2591]: I0909 23:20:27.917027 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86e48b1c-cee2-406a-b36d-625a368f74e4" (UID: "86e48b1c-cee2-406a-b36d-625a368f74e4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:20:28.013540 kubelet[2591]: I0909 23:20:28.013468 2591 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013540 kubelet[2591]: I0909 23:20:28.013531 2591 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013540 kubelet[2591]: I0909 23:20:28.013551 2591 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013561 2591 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013569 2591 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013576 2591 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013584 2591 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hncqb\" (UniqueName: \"kubernetes.io/projected/86e48b1c-cee2-406a-b36d-625a368f74e4-kube-api-access-hncqb\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013592 2591 
reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013599 2591 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prnhz\" (UniqueName: \"kubernetes.io/projected/9c673b83-9dc6-47f7-8e91-7954c89e04c6-kube-api-access-prnhz\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013607 2591 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013734 kubelet[2591]: I0909 23:20:28.013615 2591 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86e48b1c-cee2-406a-b36d-625a368f74e4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013887 kubelet[2591]: I0909 23:20:28.013622 2591 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86e48b1c-cee2-406a-b36d-625a368f74e4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013887 kubelet[2591]: I0909 23:20:28.013629 2591 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c673b83-9dc6-47f7-8e91-7954c89e04c6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013887 kubelet[2591]: I0909 23:20:28.013637 2591 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013887 kubelet[2591]: I0909 23:20:28.013646 2591 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.013887 kubelet[2591]: I0909 23:20:28.013653 2591 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86e48b1c-cee2-406a-b36d-625a368f74e4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 23:20:28.655429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1a3236e8239522922ade78de6ebccfaf2996ef5ead0045ca0af731b98443a7f-rootfs.mount: Deactivated successfully. Sep 9 23:20:28.655565 systemd[1]: var-lib-kubelet-pods-9c673b83\x2d9dc6\x2d47f7\x2d8e91\x2d7954c89e04c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dprnhz.mount: Deactivated successfully. Sep 9 23:20:28.655642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25f6abaa66e0c7648738089faf36994adac8879c8198a655e3f0c3dcf7962963-rootfs.mount: Deactivated successfully. Sep 9 23:20:28.655703 systemd[1]: var-lib-kubelet-pods-86e48b1c\x2dcee2\x2d406a\x2db36d\x2d625a368f74e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhncqb.mount: Deactivated successfully. Sep 9 23:20:28.655764 systemd[1]: var-lib-kubelet-pods-86e48b1c\x2dcee2\x2d406a\x2db36d\x2d625a368f74e4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:20:28.655826 systemd[1]: var-lib-kubelet-pods-86e48b1c\x2dcee2\x2d406a\x2db36d\x2d625a368f74e4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:20:28.659930 systemd[1]: Removed slice kubepods-besteffort-pod9c673b83_9dc6_47f7_8e91_7954c89e04c6.slice - libcontainer container kubepods-besteffort-pod9c673b83_9dc6_47f7_8e91_7954c89e04c6.slice. Sep 9 23:20:28.662178 systemd[1]: Removed slice kubepods-burstable-pod86e48b1c_cee2_406a_b36d_625a368f74e4.slice - libcontainer container kubepods-burstable-pod86e48b1c_cee2_406a_b36d_625a368f74e4.slice. 
Sep 9 23:20:28.662369 systemd[1]: kubepods-burstable-pod86e48b1c_cee2_406a_b36d_625a368f74e4.slice: Consumed 6.251s CPU time, 124.7M memory peak, 148K read from disk, 12.9M written to disk. Sep 9 23:20:28.858510 kubelet[2591]: I0909 23:20:28.858473 2591 scope.go:117] "RemoveContainer" containerID="ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce" Sep 9 23:20:28.861069 containerd[1476]: time="2025-09-09T23:20:28.861033940Z" level=info msg="RemoveContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\"" Sep 9 23:20:28.892087 containerd[1476]: time="2025-09-09T23:20:28.892044049Z" level=info msg="RemoveContainer for \"ab64c9a5f2d6c33c88e130200dc9c139b0893dba06b5e1a0e05785a818235cce\" returns successfully" Sep 9 23:20:28.892527 kubelet[2591]: I0909 23:20:28.892500 2591 scope.go:117] "RemoveContainer" containerID="2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2" Sep 9 23:20:28.893780 containerd[1476]: time="2025-09-09T23:20:28.893739637Z" level=info msg="RemoveContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\"" Sep 9 23:20:28.896277 containerd[1476]: time="2025-09-09T23:20:28.896241140Z" level=info msg="RemoveContainer for \"2b525161e6df0f6f2e5e1cdae9c8a1ae08ea6d217a1b405ca1e4d640872d80c2\" returns successfully" Sep 9 23:20:28.896460 kubelet[2591]: I0909 23:20:28.896429 2591 scope.go:117] "RemoveContainer" containerID="9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1" Sep 9 23:20:28.897517 containerd[1476]: time="2025-09-09T23:20:28.897484892Z" level=info msg="RemoveContainer for \"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\"" Sep 9 23:20:28.900079 containerd[1476]: time="2025-09-09T23:20:28.900043314Z" level=info msg="RemoveContainer for \"9cb94be6575b3c7ff9d7dc856739cfd50d1f05a69ecbe52ae0740db9e14ba4b1\" returns successfully" Sep 9 23:20:28.900225 kubelet[2591]: I0909 23:20:28.900198 2591 scope.go:117] "RemoveContainer" 
containerID="2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c" Sep 9 23:20:28.901130 containerd[1476]: time="2025-09-09T23:20:28.901108747Z" level=info msg="RemoveContainer for \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\"" Sep 9 23:20:28.903349 containerd[1476]: time="2025-09-09T23:20:28.903314452Z" level=info msg="RemoveContainer for \"2dfeddd8af877248b0e62ec092f5ebdafeb716380053ecaec3487d32f195977c\" returns successfully" Sep 9 23:20:28.903500 kubelet[2591]: I0909 23:20:28.903468 2591 scope.go:117] "RemoveContainer" containerID="32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2" Sep 9 23:20:28.904345 containerd[1476]: time="2025-09-09T23:20:28.904324845Z" level=info msg="RemoveContainer for \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\"" Sep 9 23:20:28.921694 containerd[1476]: time="2025-09-09T23:20:28.921585127Z" level=info msg="RemoveContainer for \"32f89b521b2c8fd1fc485e417cbdb5980aabfcc89ba0b5cacd48b9eecc71b7c2\" returns successfully" Sep 9 23:20:28.921860 kubelet[2591]: I0909 23:20:28.921835 2591 scope.go:117] "RemoveContainer" containerID="5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd" Sep 9 23:20:28.923793 containerd[1476]: time="2025-09-09T23:20:28.923525714Z" level=info msg="RemoveContainer for \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\"" Sep 9 23:20:28.932058 containerd[1476]: time="2025-09-09T23:20:28.932019456Z" level=info msg="RemoveContainer for \"5026cdd364ae7972e39ea92c91cfe6954926b8c655ed4af370c7d66f845468dd\" returns successfully" Sep 9 23:20:29.597715 sshd[4261]: Connection closed by 10.0.0.1 port 54762 Sep 9 23:20:29.599082 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Sep 9 23:20:29.613064 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:54762.service: Deactivated successfully. Sep 9 23:20:29.617462 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 9 23:20:29.617814 systemd[1]: session-23.scope: Consumed 1.213s CPU time, 28.6M memory peak. Sep 9 23:20:29.619524 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Sep 9 23:20:29.639885 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:54770.service - OpenSSH per-connection server daemon (10.0.0.1:54770). Sep 9 23:20:29.640644 systemd-logind[1463]: Removed session 23. Sep 9 23:20:29.679454 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 54770 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:20:29.680782 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:20:29.685568 systemd-logind[1463]: New session 24 of user core. Sep 9 23:20:29.694699 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 23:20:29.701073 kubelet[2591]: E0909 23:20:29.701012 2591 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 23:20:30.654690 kubelet[2591]: I0909 23:20:30.654648 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e48b1c-cee2-406a-b36d-625a368f74e4" path="/var/lib/kubelet/pods/86e48b1c-cee2-406a-b36d-625a368f74e4/volumes" Sep 9 23:20:30.655275 kubelet[2591]: I0909 23:20:30.655242 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c673b83-9dc6-47f7-8e91-7954c89e04c6" path="/var/lib/kubelet/pods/9c673b83-9dc6-47f7-8e91-7954c89e04c6/volumes" Sep 9 23:20:32.076195 sshd[4424]: Connection closed by 10.0.0.1 port 54770 Sep 9 23:20:32.076552 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Sep 9 23:20:32.094750 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:54770.service: Deactivated successfully. Sep 9 23:20:32.096210 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 9 23:20:32.096398 systemd[1]: session-24.scope: Consumed 2.300s CPU time, 28.7M memory peak. Sep 9 23:20:32.099157 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Sep 9 23:20:32.109656 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:59602.service - OpenSSH per-connection server daemon (10.0.0.1:59602). Sep 9 23:20:32.113472 systemd-logind[1463]: Removed session 24. Sep 9 23:20:32.127447 systemd[1]: Created slice kubepods-burstable-pod90a4928e_0465_4baa_84b0_444eb2ce78c7.slice - libcontainer container kubepods-burstable-pod90a4928e_0465_4baa_84b0_444eb2ce78c7.slice. Sep 9 23:20:32.136850 kubelet[2591]: I0909 23:20:32.136776 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-bpf-maps\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.136850 kubelet[2591]: I0909 23:20:32.136818 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90a4928e-0465-4baa-84b0-444eb2ce78c7-cilium-ipsec-secrets\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.136850 kubelet[2591]: I0909 23:20:32.136838 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-host-proc-sys-net\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.136850 kubelet[2591]: I0909 23:20:32.136854 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-xtables-lock\") 
pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 kubelet[2591]: I0909 23:20:32.137120 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-hostproc\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 kubelet[2591]: I0909 23:20:32.137148 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90a4928e-0465-4baa-84b0-444eb2ce78c7-cilium-config-path\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 kubelet[2591]: I0909 23:20:32.137340 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90a4928e-0465-4baa-84b0-444eb2ce78c7-hubble-tls\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 kubelet[2591]: I0909 23:20:32.137360 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-lib-modules\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 kubelet[2591]: I0909 23:20:32.137376 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-host-proc-sys-kernel\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137725 
kubelet[2591]: I0909 23:20:32.137392 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-cilium-run\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137986 kubelet[2591]: I0909 23:20:32.137407 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tvb\" (UniqueName: \"kubernetes.io/projected/90a4928e-0465-4baa-84b0-444eb2ce78c7-kube-api-access-c2tvb\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137986 kubelet[2591]: I0909 23:20:32.137421 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-cni-path\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137986 kubelet[2591]: I0909 23:20:32.137436 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-cilium-cgroup\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137986 kubelet[2591]: I0909 23:20:32.137449 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90a4928e-0465-4baa-84b0-444eb2ce78c7-etc-cni-netd\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.137986 kubelet[2591]: I0909 23:20:32.137463 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90a4928e-0465-4baa-84b0-444eb2ce78c7-clustermesh-secrets\") pod \"cilium-6c4ks\" (UID: \"90a4928e-0465-4baa-84b0-444eb2ce78c7\") " pod="kube-system/cilium-6c4ks" Sep 9 23:20:32.154410 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 59602 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:20:32.155721 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:20:32.159459 systemd-logind[1463]: New session 25 of user core. Sep 9 23:20:32.169627 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 23:20:32.218634 sshd[4439]: Connection closed by 10.0.0.1 port 59602 Sep 9 23:20:32.219115 sshd-session[4436]: pam_unix(sshd:session): session closed for user core Sep 9 23:20:32.229934 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:59602.service: Deactivated successfully. Sep 9 23:20:32.231488 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 23:20:32.232862 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Sep 9 23:20:32.239870 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:59618.service - OpenSSH per-connection server daemon (10.0.0.1:59618). Sep 9 23:20:32.258173 systemd-logind[1463]: Removed session 25. Sep 9 23:20:32.282886 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 59618 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:20:32.284039 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:20:32.288529 systemd-logind[1463]: New session 26 of user core. Sep 9 23:20:32.297671 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 9 23:20:32.431380 kubelet[2591]: E0909 23:20:32.431265 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:32.432037 containerd[1476]: time="2025-09-09T23:20:32.431994220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6c4ks,Uid:90a4928e-0465-4baa-84b0-444eb2ce78c7,Namespace:kube-system,Attempt:0,}"
Sep 9 23:20:32.448431 containerd[1476]: time="2025-09-09T23:20:32.448317932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 23:20:32.448609 containerd[1476]: time="2025-09-09T23:20:32.448471572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 23:20:32.448609 containerd[1476]: time="2025-09-09T23:20:32.448512852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:20:32.449133 containerd[1476]: time="2025-09-09T23:20:32.448962291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 23:20:32.470683 systemd[1]: Started cri-containerd-f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f.scope - libcontainer container f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f.
Sep 9 23:20:32.488442 containerd[1476]: time="2025-09-09T23:20:32.488406215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6c4ks,Uid:90a4928e-0465-4baa-84b0-444eb2ce78c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\""
Sep 9 23:20:32.489014 kubelet[2591]: E0909 23:20:32.488995 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:32.493150 containerd[1476]: time="2025-09-09T23:20:32.493000722Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:20:32.502563 containerd[1476]: time="2025-09-09T23:20:32.502521774Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d\""
Sep 9 23:20:32.503107 containerd[1476]: time="2025-09-09T23:20:32.503080492Z" level=info msg="StartContainer for \"d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d\""
Sep 9 23:20:32.528677 systemd[1]: Started cri-containerd-d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d.scope - libcontainer container d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d.
Sep 9 23:20:32.551313 containerd[1476]: time="2025-09-09T23:20:32.551273672Z" level=info msg="StartContainer for \"d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d\" returns successfully"
Sep 9 23:20:32.559810 systemd[1]: cri-containerd-d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d.scope: Deactivated successfully.
Sep 9 23:20:32.586821 containerd[1476]: time="2025-09-09T23:20:32.586591128Z" level=info msg="shim disconnected" id=d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d namespace=k8s.io
Sep 9 23:20:32.586821 containerd[1476]: time="2025-09-09T23:20:32.586656088Z" level=warning msg="cleaning up after shim disconnected" id=d23c1c3c33d71394e07ca5f7746bc272e662bbb459ce6c672b60a5a44e9a6e3d namespace=k8s.io
Sep 9 23:20:32.586821 containerd[1476]: time="2025-09-09T23:20:32.586665288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:32.876590 kubelet[2591]: E0909 23:20:32.876373 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:32.881615 containerd[1476]: time="2025-09-09T23:20:32.881576427Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:20:32.895562 containerd[1476]: time="2025-09-09T23:20:32.895520266Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602\""
Sep 9 23:20:32.897113 containerd[1476]: time="2025-09-09T23:20:32.896263184Z" level=info msg="StartContainer for \"1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602\""
Sep 9 23:20:32.922685 systemd[1]: Started cri-containerd-1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602.scope - libcontainer container 1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602.
Sep 9 23:20:32.945829 containerd[1476]: time="2025-09-09T23:20:32.943853125Z" level=info msg="StartContainer for \"1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602\" returns successfully"
Sep 9 23:20:32.949442 systemd[1]: cri-containerd-1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602.scope: Deactivated successfully.
Sep 9 23:20:32.967602 containerd[1476]: time="2025-09-09T23:20:32.967528936Z" level=info msg="shim disconnected" id=1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602 namespace=k8s.io
Sep 9 23:20:32.967602 containerd[1476]: time="2025-09-09T23:20:32.967579975Z" level=warning msg="cleaning up after shim disconnected" id=1f5c0431758a7202fad0419a37b5f72ef66b5c3148212277bf53866857362602 namespace=k8s.io
Sep 9 23:20:32.967950 containerd[1476]: time="2025-09-09T23:20:32.967788855Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:33.879669 kubelet[2591]: E0909 23:20:33.879629 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:33.888770 containerd[1476]: time="2025-09-09T23:20:33.888729645Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:20:33.904546 containerd[1476]: time="2025-09-09T23:20:33.904488693Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8\""
Sep 9 23:20:33.905272 containerd[1476]: time="2025-09-09T23:20:33.905237492Z" level=info msg="StartContainer for \"6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8\""
Sep 9 23:20:33.935691 systemd[1]: Started cri-containerd-6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8.scope - libcontainer container 6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8.
Sep 9 23:20:33.963911 containerd[1476]: time="2025-09-09T23:20:33.963847253Z" level=info msg="StartContainer for \"6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8\" returns successfully"
Sep 9 23:20:33.965115 systemd[1]: cri-containerd-6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8.scope: Deactivated successfully.
Sep 9 23:20:33.987543 containerd[1476]: time="2025-09-09T23:20:33.987450245Z" level=info msg="shim disconnected" id=6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8 namespace=k8s.io
Sep 9 23:20:33.987543 containerd[1476]: time="2025-09-09T23:20:33.987533205Z" level=warning msg="cleaning up after shim disconnected" id=6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8 namespace=k8s.io
Sep 9 23:20:33.987543 containerd[1476]: time="2025-09-09T23:20:33.987543205Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:34.242474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fbdf5ed4dee26da97e6b52967b0a020e29dcdcd9cccb4ca12f751cd67fe98a8-rootfs.mount: Deactivated successfully.
Sep 9 23:20:34.701599 kubelet[2591]: E0909 23:20:34.701560 2591 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:20:34.883190 kubelet[2591]: E0909 23:20:34.883014 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:34.886966 containerd[1476]: time="2025-09-09T23:20:34.886931162Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:20:34.902782 containerd[1476]: time="2025-09-09T23:20:34.902741784Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae\""
Sep 9 23:20:34.903511 containerd[1476]: time="2025-09-09T23:20:34.903466903Z" level=info msg="StartContainer for \"24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae\""
Sep 9 23:20:34.936655 systemd[1]: Started cri-containerd-24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae.scope - libcontainer container 24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae.
Sep 9 23:20:34.956880 systemd[1]: cri-containerd-24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae.scope: Deactivated successfully.
Sep 9 23:20:34.959851 containerd[1476]: time="2025-09-09T23:20:34.959815399Z" level=info msg="StartContainer for \"24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae\" returns successfully"
Sep 9 23:20:34.979724 containerd[1476]: time="2025-09-09T23:20:34.979670816Z" level=info msg="shim disconnected" id=24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae namespace=k8s.io
Sep 9 23:20:34.979724 containerd[1476]: time="2025-09-09T23:20:34.979722536Z" level=warning msg="cleaning up after shim disconnected" id=24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae namespace=k8s.io
Sep 9 23:20:34.979891 containerd[1476]: time="2025-09-09T23:20:34.979734096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:20:35.242591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f8cf36ad93b5f5a12b691640d02c1a42cf8c4892631733c998b83e1d0ea9ae-rootfs.mount: Deactivated successfully.
Sep 9 23:20:35.887030 kubelet[2591]: E0909 23:20:35.886973 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:35.891604 containerd[1476]: time="2025-09-09T23:20:35.891438964Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:20:35.902976 containerd[1476]: time="2025-09-09T23:20:35.902905600Z" level=info msg="CreateContainer within sandbox \"f39658f15da716431bf7bcdc25e5d9597394bed1488aef0c046452883eebd11f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2635adba254a9ebadf87c0d2936d48de8fa2d0732a223a0e11f75fedd443f25\""
Sep 9 23:20:35.904812 containerd[1476]: time="2025-09-09T23:20:35.904776040Z" level=info msg="StartContainer for \"f2635adba254a9ebadf87c0d2936d48de8fa2d0732a223a0e11f75fedd443f25\""
Sep 9 23:20:35.931681 systemd[1]: Started cri-containerd-f2635adba254a9ebadf87c0d2936d48de8fa2d0732a223a0e11f75fedd443f25.scope - libcontainer container f2635adba254a9ebadf87c0d2936d48de8fa2d0732a223a0e11f75fedd443f25.
Sep 9 23:20:35.965215 containerd[1476]: time="2025-09-09T23:20:35.965104581Z" level=info msg="StartContainer for \"f2635adba254a9ebadf87c0d2936d48de8fa2d0732a223a0e11f75fedd443f25\" returns successfully"
Sep 9 23:20:36.215568 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:20:36.392532 kubelet[2591]: I0909 23:20:36.388541 2591 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:20:36Z","lastTransitionTime":"2025-09-09T23:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 23:20:36.891693 kubelet[2591]: E0909 23:20:36.891654 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:36.913050 kubelet[2591]: I0909 23:20:36.912986 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6c4ks" podStartSLOduration=4.912969764 podStartE2EDuration="4.912969764s" podCreationTimestamp="2025-09-09 23:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:20:36.907766281 +0000 UTC m=+82.347903041" watchObservedRunningTime="2025-09-09 23:20:36.912969764 +0000 UTC m=+82.353106484"
Sep 9 23:20:38.432161 kubelet[2591]: E0909 23:20:38.432093 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:38.977949 systemd-networkd[1386]: lxc_health: Link UP
Sep 9 23:20:38.979103 systemd-networkd[1386]: lxc_health: Gained carrier
Sep 9 23:20:40.384706 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Sep 9 23:20:40.433319 kubelet[2591]: E0909 23:20:40.433276 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:40.898581 kubelet[2591]: E0909 23:20:40.898446 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:41.900343 kubelet[2591]: E0909 23:20:41.899926 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:42.652386 kubelet[2591]: E0909 23:20:42.652350 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:43.653367 kubelet[2591]: E0909 23:20:43.652980 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 23:20:45.012715 sshd[4452]: Connection closed by 10.0.0.1 port 59618
Sep 9 23:20:45.013170 sshd-session[4445]: pam_unix(sshd:session): session closed for user core
Sep 9 23:20:45.017047 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:59618.service: Deactivated successfully.
Sep 9 23:20:45.018887 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 23:20:45.019672 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit.
Sep 9 23:20:45.020602 systemd-logind[1463]: Removed session 26.