Oct 27 23:23:01.888646 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 27 23:23:01.888669 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Oct 27 22:11:36 -00 2025 Oct 27 23:23:01.888693 kernel: KASLR enabled Oct 27 23:23:01.888699 kernel: efi: EFI v2.7 by EDK II Oct 27 23:23:01.888705 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Oct 27 23:23:01.888711 kernel: random: crng init done Oct 27 23:23:01.888718 kernel: secureboot: Secure boot disabled Oct 27 23:23:01.888724 kernel: ACPI: Early table checksum verification disabled Oct 27 23:23:01.888730 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Oct 27 23:23:01.888737 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 27 23:23:01.888743 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888749 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888755 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888762 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888769 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888777 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888783 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888789 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888796 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:23:01.888802 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 27 23:23:01.888808 kernel: NUMA: Failed to initialise from firmware Oct 27 23:23:01.888814 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:23:01.888829 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Oct 27 23:23:01.888836 kernel: Zone ranges: Oct 27 23:23:01.888843 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:23:01.888851 kernel: DMA32 empty Oct 27 23:23:01.888857 kernel: Normal empty Oct 27 23:23:01.888863 kernel: Movable zone start for each node Oct 27 23:23:01.888869 kernel: Early memory node ranges Oct 27 23:23:01.888875 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Oct 27 23:23:01.888881 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Oct 27 23:23:01.888887 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Oct 27 23:23:01.888893 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 27 23:23:01.888900 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 27 23:23:01.888906 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 27 23:23:01.888912 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 27 23:23:01.888918 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 27 23:23:01.888925 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 27 23:23:01.888932 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:23:01.888938 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 27 23:23:01.888947 kernel: psci: 
probing for conduit method from ACPI. Oct 27 23:23:01.888954 kernel: psci: PSCIv1.1 detected in firmware. Oct 27 23:23:01.888960 kernel: psci: Using standard PSCI v0.2 function IDs Oct 27 23:23:01.888969 kernel: psci: Trusted OS migration not required Oct 27 23:23:01.888975 kernel: psci: SMC Calling Convention v1.1 Oct 27 23:23:01.888982 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 27 23:23:01.888989 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Oct 27 23:23:01.888995 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Oct 27 23:23:01.889002 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 27 23:23:01.889009 kernel: Detected PIPT I-cache on CPU0 Oct 27 23:23:01.889015 kernel: CPU features: detected: GIC system register CPU interface Oct 27 23:23:01.889022 kernel: CPU features: detected: Hardware dirty bit management Oct 27 23:23:01.889028 kernel: CPU features: detected: Spectre-v4 Oct 27 23:23:01.889036 kernel: CPU features: detected: Spectre-BHB Oct 27 23:23:01.889043 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 27 23:23:01.889049 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 27 23:23:01.889056 kernel: CPU features: detected: ARM erratum 1418040 Oct 27 23:23:01.889062 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 27 23:23:01.889069 kernel: alternatives: applying boot alternatives Oct 27 23:23:01.889076 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e7e3bb3d45cdf83dc44aaf22327a51afe76152af638616b83c00ab1a45937f6d Oct 27 23:23:01.890003 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 27 23:23:01.890029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 23:23:01.890036 kernel: Fallback order for Node 0: 0 Oct 27 23:23:01.890043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 27 23:23:01.890056 kernel: Policy zone: DMA Oct 27 23:23:01.890063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 23:23:01.890069 kernel: software IO TLB: area num 4. Oct 27 23:23:01.890076 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 27 23:23:01.890092 kernel: Memory: 2387408K/2572288K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 184880K reserved, 0K cma-reserved) Oct 27 23:23:01.890101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 27 23:23:01.890108 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 23:23:01.890115 kernel: rcu: RCU event tracing is enabled. Oct 27 23:23:01.890122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 27 23:23:01.890129 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 23:23:01.890135 kernel: Tracing variant of Tasks RCU enabled. Oct 27 23:23:01.890142 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 27 23:23:01.890151 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 27 23:23:01.890157 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 27 23:23:01.890164 kernel: GICv3: 256 SPIs implemented Oct 27 23:23:01.890170 kernel: GICv3: 0 Extended SPIs implemented Oct 27 23:23:01.890177 kernel: Root IRQ handler: gic_handle_irq Oct 27 23:23:01.890183 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 27 23:23:01.890190 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 27 23:23:01.890196 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 27 23:23:01.890203 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Oct 27 23:23:01.890210 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Oct 27 23:23:01.890216 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 27 23:23:01.890225 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 27 23:23:01.890231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 23:23:01.890238 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:23:01.890245 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 27 23:23:01.890251 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 27 23:23:01.890258 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 27 23:23:01.890265 kernel: arm-pv: using stolen time PV Oct 27 23:23:01.890272 kernel: Console: colour dummy device 80x25 Oct 27 23:23:01.890278 kernel: ACPI: Core revision 20230628 Oct 27 23:23:01.890285 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 27 23:23:01.890292 kernel: pid_max: default: 32768 minimum: 301 Oct 27 23:23:01.890301 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 27 23:23:01.890307 kernel: landlock: Up and running. Oct 27 23:23:01.890314 kernel: SELinux: Initializing. Oct 27 23:23:01.890321 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 23:23:01.890328 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 23:23:01.890335 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 23:23:01.890342 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 23:23:01.890348 kernel: rcu: Hierarchical SRCU implementation. Oct 27 23:23:01.890355 kernel: rcu: Max phase no-delay instances is 400. Oct 27 23:23:01.890364 kernel: Platform MSI: ITS@0x8080000 domain created Oct 27 23:23:01.890370 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 27 23:23:01.890377 kernel: Remapping and enabling EFI services. Oct 27 23:23:01.890384 kernel: smp: Bringing up secondary CPUs ... 
Oct 27 23:23:01.890391 kernel: Detected PIPT I-cache on CPU1 Oct 27 23:23:01.890398 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 27 23:23:01.890405 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 27 23:23:01.890412 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:23:01.890418 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 27 23:23:01.890428 kernel: Detected PIPT I-cache on CPU2 Oct 27 23:23:01.890435 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 27 23:23:01.890447 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 27 23:23:01.890456 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:23:01.890463 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 27 23:23:01.890470 kernel: Detected PIPT I-cache on CPU3 Oct 27 23:23:01.890477 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 27 23:23:01.890485 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 27 23:23:01.890538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:23:01.890548 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 27 23:23:01.890556 kernel: smp: Brought up 1 node, 4 CPUs Oct 27 23:23:01.890563 kernel: SMP: Total of 4 processors activated. Oct 27 23:23:01.890571 kernel: CPU features: detected: 32-bit EL0 Support Oct 27 23:23:01.890578 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 27 23:23:01.890585 kernel: CPU features: detected: Common not Private translations Oct 27 23:23:01.890592 kernel: CPU features: detected: CRC32 instructions Oct 27 23:23:01.890600 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 27 23:23:01.890610 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 27 23:23:01.890617 kernel: CPU features: detected: LSE atomic instructions Oct 27 23:23:01.890625 kernel: CPU features: detected: Privileged Access Never Oct 27 23:23:01.890633 kernel: CPU features: detected: RAS Extension Support Oct 27 23:23:01.890640 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 27 23:23:01.890647 kernel: CPU: All CPU(s) started at EL1 Oct 27 23:23:01.890654 kernel: alternatives: applying system-wide alternatives Oct 27 23:23:01.890661 kernel: devtmpfs: initialized Oct 27 23:23:01.890669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 23:23:01.890678 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 27 23:23:01.890685 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 23:23:01.890692 kernel: SMBIOS 3.0.0 present. 
Oct 27 23:23:01.890721 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 27 23:23:01.890729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 23:23:01.890736 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 27 23:23:01.890743 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 27 23:23:01.890750 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 27 23:23:01.890757 kernel: audit: initializing netlink subsys (disabled) Oct 27 23:23:01.890766 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Oct 27 23:23:01.890773 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 23:23:01.890780 kernel: cpuidle: using governor menu Oct 27 23:23:01.890787 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 27 23:23:01.890794 kernel: ASID allocator initialised with 32768 entries Oct 27 23:23:01.890802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 23:23:01.890809 kernel: Serial: AMBA PL011 UART driver Oct 27 23:23:01.890816 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 27 23:23:01.890839 kernel: Modules: 0 pages in range for non-PLT usage Oct 27 23:23:01.890886 kernel: Modules: 509248 pages in range for PLT usage Oct 27 23:23:01.890895 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 23:23:01.890902 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 23:23:01.890909 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 27 23:23:01.890917 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 27 23:23:01.890924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 23:23:01.890931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 23:23:01.890938 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 27 23:23:01.890945 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 27 23:23:01.891744 kernel: ACPI: Added _OSI(Module Device) Oct 27 23:23:01.891753 kernel: ACPI: Added _OSI(Processor Device) Oct 27 23:23:01.891760 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 23:23:01.891768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 23:23:01.891774 kernel: ACPI: Interpreter enabled Oct 27 23:23:01.891782 kernel: ACPI: Using GIC for interrupt routing Oct 27 23:23:01.891789 kernel: ACPI: MCFG table detected, 1 entries Oct 27 23:23:01.891796 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 27 23:23:01.891803 kernel: printk: console [ttyAMA0] enabled Oct 27 23:23:01.891810 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 23:23:01.892007 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 23:23:01.892094 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 27 23:23:01.892167 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 27 23:23:01.892232 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 27 23:23:01.892297 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 27 23:23:01.892306 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 27 23:23:01.892318 kernel: PCI host bridge to bus 0000:00 Oct 27 23:23:01.892405 kernel: 
pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 27 23:23:01.892466 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 27 23:23:01.892642 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 27 23:23:01.895790 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 23:23:01.895915 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 27 23:23:01.896002 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 27 23:23:01.896082 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 27 23:23:01.896167 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 27 23:23:01.896235 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 27 23:23:01.896302 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 27 23:23:01.896368 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 27 23:23:01.896434 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 27 23:23:01.897329 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 27 23:23:01.897464 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 27 23:23:01.897527 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 27 23:23:01.897537 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 27 23:23:01.897544 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 27 23:23:01.897552 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 27 23:23:01.897559 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 27 23:23:01.897566 kernel: iommu: Default domain type: Translated Oct 27 23:23:01.897573 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 27 23:23:01.897584 kernel: efivars: Registered efivars operations Oct 27 23:23:01.897591 kernel: vgaarb: loaded Oct 27 23:23:01.897598 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 27 23:23:01.897605 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 23:23:01.897613 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 23:23:01.897620 kernel: pnp: PnP ACPI init Oct 27 23:23:01.897697 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 27 23:23:01.897709 kernel: pnp: PnP ACPI: found 1 devices Oct 27 23:23:01.897716 kernel: NET: Registered PF_INET protocol family Oct 27 23:23:01.897725 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 27 23:23:01.897733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 27 23:23:01.897740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 23:23:01.897747 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 23:23:01.897755 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 27 23:23:01.897762 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 27 23:23:01.897769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 23:23:01.897777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 23:23:01.897786 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 23:23:01.897793 kernel: PCI: CLS 0 bytes, default 64 Oct 27 23:23:01.897800 kernel: kvm [1]: HYP mode not available Oct 27 23:23:01.897807 kernel: Initialise system trusted keyrings Oct 
27 23:23:01.897815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 27 23:23:01.897835 kernel: Key type asymmetric registered Oct 27 23:23:01.897845 kernel: Asymmetric key parser 'x509' registered Oct 27 23:23:01.897852 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 23:23:01.897859 kernel: io scheduler mq-deadline registered Oct 27 23:23:01.897869 kernel: io scheduler kyber registered Oct 27 23:23:01.897876 kernel: io scheduler bfq registered Oct 27 23:23:01.897884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 27 23:23:01.897891 kernel: ACPI: button: Power Button [PWRB] Oct 27 23:23:01.897899 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 27 23:23:01.897980 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 27 23:23:01.897991 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 23:23:01.897998 kernel: thunder_xcv, ver 1.0 Oct 27 23:23:01.898005 kernel: thunder_bgx, ver 1.0 Oct 27 23:23:01.898015 kernel: nicpf, ver 1.0 Oct 27 23:23:01.898022 kernel: nicvf, ver 1.0 Oct 27 23:23:01.898128 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 27 23:23:01.898221 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-27T23:23:01 UTC (1761607381) Oct 27 23:23:01.898232 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 27 23:23:01.898240 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 27 23:23:01.898247 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 27 23:23:01.898255 kernel: watchdog: Hard watchdog permanently disabled Oct 27 23:23:01.898266 kernel: NET: Registered PF_INET6 protocol family Oct 27 23:23:01.898273 kernel: Segment Routing with IPv6 Oct 27 23:23:01.898280 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 23:23:01.898287 kernel: NET: Registered PF_PACKET protocol family Oct 27 23:23:01.898294 kernel: Key type dns_resolver registered Oct 27 23:23:01.898301 kernel: registered taskstats version 1 Oct 27 23:23:01.898308 kernel: Loading compiled-in X.509 certificates Oct 27 23:23:01.898316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 410133625b419ed591a9386099a5a05c0b3153a6' Oct 27 23:23:01.898323 kernel: Key type .fscrypt registered Oct 27 23:23:01.898330 kernel: Key type fscrypt-provisioning registered Oct 27 23:23:01.898338 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 27 23:23:01.898346 kernel: ima: Allocated hash algorithm: sha1 Oct 27 23:23:01.898353 kernel: ima: No architecture policies found Oct 27 23:23:01.898360 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 27 23:23:01.898368 kernel: clk: Disabling unused clocks Oct 27 23:23:01.898375 kernel: Freeing unused kernel memory: 38400K Oct 27 23:23:01.898382 kernel: Run /init as init process Oct 27 23:23:01.898389 kernel: with arguments: Oct 27 23:23:01.898396 kernel: /init Oct 27 23:23:01.898405 kernel: with environment: Oct 27 23:23:01.898412 kernel: HOME=/ Oct 27 23:23:01.898419 kernel: TERM=linux Oct 27 23:23:01.898427 systemd[1]: Successfully made /usr/ read-only. 
Oct 27 23:23:01.898437 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 23:23:01.898445 systemd[1]: Detected virtualization kvm. Oct 27 23:23:01.898453 systemd[1]: Detected architecture arm64. Oct 27 23:23:01.898462 systemd[1]: Running in initrd. Oct 27 23:23:01.898469 systemd[1]: No hostname configured, using default hostname. Oct 27 23:23:01.898477 systemd[1]: Hostname set to . Oct 27 23:23:01.898485 systemd[1]: Initializing machine ID from VM UUID. Oct 27 23:23:01.898493 systemd[1]: Queued start job for default target initrd.target. Oct 27 23:23:01.898501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:23:01.898508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:23:01.898517 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 23:23:01.898526 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 23:23:01.898534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 23:23:01.898542 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 23:23:01.898551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 27 23:23:01.898559 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 27 23:23:01.898567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:23:01.898575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:23:01.898584 systemd[1]: Reached target paths.target - Path Units. Oct 27 23:23:01.898592 systemd[1]: Reached target slices.target - Slice Units. Oct 27 23:23:01.898600 systemd[1]: Reached target swap.target - Swaps. Oct 27 23:23:01.898608 systemd[1]: Reached target timers.target - Timer Units. Oct 27 23:23:01.898615 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 23:23:01.898623 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 23:23:01.898631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 23:23:01.898638 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 23:23:01.898646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:23:01.898655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 23:23:01.898663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:23:01.898671 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 23:23:01.898679 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 23:23:01.898686 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 23:23:01.898694 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 23:23:01.898702 systemd[1]: Starting systemd-fsck-usr.service... 
Oct 27 23:23:01.898710 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 23:23:01.898720 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 23:23:01.898727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:23:01.898735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 23:23:01.898743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:23:01.898751 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 23:23:01.898782 systemd-journald[239]: Collecting audit messages is disabled. Oct 27 23:23:01.898803 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 23:23:01.898811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 23:23:01.898819 kernel: Bridge firewalling registered Oct 27 23:23:01.898880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:23:01.898888 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 23:23:01.898896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 23:23:01.898905 systemd-journald[239]: Journal started Oct 27 23:23:01.898923 systemd-journald[239]: Runtime Journal (/run/log/journal/d1aedafe66794da795034e7d9d07969f) is 5.9M, max 47.3M, 41.4M free. Oct 27 23:23:01.872725 systemd-modules-load[240]: Inserted module 'overlay' Oct 27 23:23:01.890460 systemd-modules-load[240]: Inserted module 'br_netfilter' Oct 27 23:23:01.905328 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 23:23:01.920099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:23:01.923552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:23:01.925262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 23:23:01.929815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 23:23:01.940104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 23:23:01.942518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:23:01.944753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:23:01.955047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 23:23:01.956426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:23:01.963000 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 23:23:01.986467 systemd-resolved[280]: Positive Trust Anchors: Oct 27 23:23:01.986487 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 23:23:01.986518 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 23:23:01.991307 systemd-resolved[280]: Defaulting to hostname 'linux'. Oct 27 23:23:01.992306 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 23:23:02.000212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:23:02.006537 dracut-cmdline[282]: dracut-dracut-053 Oct 27 23:23:02.009179 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e7e3bb3d45cdf83dc44aaf22327a51afe76152af638616b83c00ab1a45937f6d Oct 27 23:23:02.076860 kernel: SCSI subsystem initialized Oct 27 23:23:02.080852 kernel: Loading iSCSI transport class v2.0-870. Oct 27 23:23:02.088857 kernel: iscsi: registered transport (tcp) Oct 27 23:23:02.102884 kernel: iscsi: registered transport (qla4xxx) Oct 27 23:23:02.102961 kernel: QLogic iSCSI HBA Driver Oct 27 23:23:02.146439 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 23:23:02.152025 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 23:23:02.169142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 23:23:02.169909 kernel: device-mapper: uevent: version 1.0.3 Oct 27 23:23:02.169934 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 27 23:23:02.216912 kernel: raid6: neonx8 gen() 15714 MB/s Oct 27 23:23:02.233873 kernel: raid6: neonx4 gen() 15757 MB/s Oct 27 23:23:02.250870 kernel: raid6: neonx2 gen() 13296 MB/s Oct 27 23:23:02.267869 kernel: raid6: neonx1 gen() 10505 MB/s Oct 27 23:23:02.284872 kernel: raid6: int64x8 gen() 6780 MB/s Oct 27 23:23:02.301868 kernel: raid6: int64x4 gen() 7321 MB/s Oct 27 23:23:02.318864 kernel: raid6: int64x2 gen() 6104 MB/s Oct 27 23:23:02.336122 kernel: raid6: int64x1 gen() 5050 MB/s Oct 27 23:23:02.336166 kernel: raid6: using algorithm neonx4 gen() 15757 MB/s Oct 27 23:23:02.354072 kernel: raid6: .... xor() 12345 MB/s, rmw enabled Oct 27 23:23:02.354134 kernel: raid6: using neon recovery algorithm Oct 27 23:23:02.360325 kernel: xor: measuring software checksum speed Oct 27 23:23:02.360353 kernel: 8regs : 21471 MB/sec Oct 27 23:23:02.360362 kernel: 32regs : 21664 MB/sec Oct 27 23:23:02.361027 kernel: arm64_neon : 27813 MB/sec Oct 27 23:23:02.361040 kernel: xor: using function: arm64_neon (27813 MB/sec) Oct 27 23:23:02.408870 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 23:23:02.420010 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 23:23:02.432035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 27 23:23:02.452635 systemd-udevd[464]: Using default interface naming scheme 'v255'. Oct 27 23:23:02.456982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 23:23:02.472032 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 23:23:02.485022 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Oct 27 23:23:02.514890 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 23:23:02.523056 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 23:23:02.568028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:23:02.577265 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 23:23:02.590079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 23:23:02.592317 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:23:02.595057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:23:02.598133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 23:23:02.608063 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 23:23:02.619261 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:23:02.631958 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 27 23:23:02.634211 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 27 23:23:02.645175 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 23:23:02.645303 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:23:02.650151 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:23:02.656807 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 23:23:02.656854 kernel: GPT:9289727 != 19775487 Oct 27 23:23:02.656864 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 23:23:02.656873 kernel: GPT:9289727 != 19775487 Oct 27 23:23:02.656884 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 23:23:02.656893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:23:02.652466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 23:23:02.652623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:23:02.658390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:23:02.671131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:23:02.680842 kernel: BTRFS: device fsid 723df9de-b44a-4541-8b84-1b67589aa78f devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (525) Oct 27 23:23:02.682844 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (518) Oct 27 23:23:02.686008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:23:02.694287 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 23:23:02.702219 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 23:23:02.718509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 23:23:02.724985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Oct 27 23:23:02.726281 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 27 23:23:02.741004 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 23:23:02.745990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:23:02.749350 disk-uuid[553]: Primary Header is updated. Oct 27 23:23:02.749350 disk-uuid[553]: Secondary Entries is updated. Oct 27 23:23:02.749350 disk-uuid[553]: Secondary Header is updated. Oct 27 23:23:02.752852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:23:02.770046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:23:03.760865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:23:03.761123 disk-uuid[554]: The operation has completed successfully. Oct 27 23:23:03.788527 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 23:23:03.788627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 23:23:03.830022 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 27 23:23:03.833246 sh[574]: Success Oct 27 23:23:03.843850 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 27 23:23:03.876560 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 27 23:23:03.889792 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 27 23:23:03.891915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 27 23:23:03.905796 kernel: BTRFS info (device dm-0): first mount of filesystem 723df9de-b44a-4541-8b84-1b67589aa78f Oct 27 23:23:03.905872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:23:03.905883 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 27 23:23:03.908053 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 23:23:03.908080 kernel: BTRFS info (device dm-0): using free space tree Oct 27 23:23:03.914425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 27 23:23:03.916037 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 23:23:03.925036 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 23:23:03.926793 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 23:23:03.946068 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:23:03.946119 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:23:03.946131 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:23:03.948851 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:23:03.953850 kernel: BTRFS info (device vda6): last unmount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:23:03.957455 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 23:23:03.964019 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 27 23:23:04.038188 ignition[652]: Ignition 2.20.0 Oct 27 23:23:04.038201 ignition[652]: Stage: fetch-offline Oct 27 23:23:04.038241 ignition[652]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:04.038250 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:04.038414 ignition[652]: parsed url from cmdline: "" Oct 27 23:23:04.038417 ignition[652]: no config URL provided Oct 27 23:23:04.038422 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 23:23:04.038429 ignition[652]: no config at "/usr/lib/ignition/user.ign" Oct 27 23:23:04.038488 ignition[652]: op(1): [started] loading QEMU firmware config module Oct 27 23:23:04.038493 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 23:23:04.050327 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 23:23:04.050364 ignition[652]: op(1): [finished] loading QEMU firmware config module Oct 27 23:23:04.060019 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 23:23:04.084149 systemd-networkd[763]: lo: Link UP Oct 27 23:23:04.084160 systemd-networkd[763]: lo: Gained carrier Oct 27 23:23:04.084991 systemd-networkd[763]: Enumeration completed Oct 27 23:23:04.085326 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:23:04.085409 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:23:04.085413 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 23:23:04.086169 systemd-networkd[763]: eth0: Link UP Oct 27 23:23:04.086172 systemd-networkd[763]: eth0: Gained carrier Oct 27 23:23:04.086178 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:23:04.089028 systemd[1]: Reached target network.target - Network. Oct 27 23:23:04.102887 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:23:04.110362 ignition[652]: parsing config with SHA512: fbebceeed74ae8795b607191e405e1385d95a3cab65f10b3c1770e73e413c1ddef5f384a273e3f15cdce7a4d4583a5ccdbc0af1eea07dec0388796459092f529 Oct 27 23:23:04.115798 unknown[652]: fetched base config from "system" Oct 27 23:23:04.115811 unknown[652]: fetched user config from "qemu" Oct 27 23:23:04.116712 ignition[652]: fetch-offline: fetch-offline passed Oct 27 23:23:04.118846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:23:04.116818 ignition[652]: Ignition finished successfully Oct 27 23:23:04.120236 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 23:23:04.125964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 23:23:04.139516 ignition[768]: Ignition 2.20.0 Oct 27 23:23:04.139539 ignition[768]: Stage: kargs Oct 27 23:23:04.139718 ignition[768]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:04.139728 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:04.142608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 27 23:23:04.140648 ignition[768]: kargs: kargs passed Oct 27 23:23:04.140693 ignition[768]: Ignition finished successfully Oct 27 23:23:04.153025 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 27 23:23:04.163131 ignition[776]: Ignition 2.20.0 Oct 27 23:23:04.163142 ignition[776]: Stage: disks Oct 27 23:23:04.163312 ignition[776]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:04.163322 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:04.165733 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 23:23:04.164237 ignition[776]: disks: disks passed Oct 27 23:23:04.168321 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 23:23:04.164283 ignition[776]: Ignition finished successfully Oct 27 23:23:04.170017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 23:23:04.171898 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 23:23:04.173814 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:23:04.175599 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:23:04.188902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 23:23:04.205187 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 27 23:23:04.254901 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 23:23:04.265940 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 23:23:04.309838 kernel: EXT4-fs (vda9): mounted filesystem 14252103-6df9-4b3e-8ac7-75c6ad5090da r/w with ordered data mode. Quota mode: none. Oct 27 23:23:04.310416 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 23:23:04.311757 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 23:23:04.328936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:23:04.331593 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 23:23:04.332779 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 23:23:04.332838 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 23:23:04.332864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:23:04.339290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 23:23:04.341151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 23:23:04.352568 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (795) Oct 27 23:23:04.352621 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:23:04.354696 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:23:04.354736 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:23:04.357836 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:23:04.359159 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 23:23:04.385849 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 23:23:04.389261 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Oct 27 23:23:04.393372 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 23:23:04.397002 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 23:23:04.469565 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 23:23:04.482949 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 23:23:04.485348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 23:23:04.490857 kernel: BTRFS info (device vda6): last unmount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:23:04.507318 ignition[908]: INFO : Ignition 2.20.0 Oct 27 23:23:04.507318 ignition[908]: INFO : Stage: mount Oct 27 23:23:04.511659 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:04.511659 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:04.511659 ignition[908]: INFO : mount: mount passed Oct 27 23:23:04.511659 ignition[908]: INFO : Ignition finished successfully Oct 27 23:23:04.509350 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 23:23:04.510592 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 23:23:04.524005 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 23:23:04.903514 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 23:23:04.922077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:23:04.929979 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (922) Oct 27 23:23:04.930022 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:23:04.930032 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:23:04.931836 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:23:04.933841 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:23:04.935212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 23:23:04.952420 ignition[939]: INFO : Ignition 2.20.0 Oct 27 23:23:04.952420 ignition[939]: INFO : Stage: files Oct 27 23:23:04.954311 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:04.954311 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:04.954311 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Oct 27 23:23:04.958137 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 23:23:04.958137 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 23:23:04.958137 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 23:23:04.958137 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 23:23:04.958137 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 23:23:04.957994 unknown[939]: wrote ssh authorized keys file for user: core Oct 27 23:23:04.966445 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 27 23:23:04.966445 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 27 23:23:05.012146 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 23:23:05.170704 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 27 23:23:05.170704 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 27 23:23:05.174931 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 27 23:23:05.425906 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 27 23:23:05.488001 systemd-networkd[763]: eth0: Gained IPv6LL Oct 27 23:23:05.592222 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 27 23:23:05.594347 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Oct 27 23:23:06.031222 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 27 23:23:06.891661 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 27 23:23:06.891661 ignition[939]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 27 23:23:06.895656 ignition[939]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 23:23:06.910640 ignition[939]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:23:06.912331 ignition[939]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:23:06.912331 ignition[939]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 23:23:06.912331 ignition[939]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 27 23:23:06.912331 ignition[939]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 23:23:06.912331 ignition[939]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:23:06.924121 ignition[939]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:23:06.924121 ignition[939]: INFO : files: files passed Oct 27 23:23:06.924121 ignition[939]: INFO : Ignition finished successfully Oct 27 23:23:06.916869 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 23:23:06.931016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 23:23:06.933698 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 23:23:06.935470 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 23:23:06.935549 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 23:23:06.941928 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 23:23:06.947721 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:23:06.947721 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:23:06.952583 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:23:06.953558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:23:06.956437 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 23:23:06.962970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 23:23:06.981049 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 23:23:06.981199 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 23:23:06.983557 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 23:23:06.985632 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 23:23:06.987741 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 23:23:06.988568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 23:23:07.004892 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:23:07.007644 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 23:23:07.019404 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:23:07.020862 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:23:07.022281 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 23:23:07.024415 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 23:23:07.024554 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:23:07.027742 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 23:23:07.029018 systemd[1]: Stopped target basic.target - Basic System. Oct 27 23:23:07.030879 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 23:23:07.033099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:23:07.035124 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 23:23:07.037006 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Oct 27 23:23:07.038968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:23:07.041381 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 23:23:07.043363 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 23:23:07.045207 systemd[1]: Stopped target swap.target - Swaps. Oct 27 23:23:07.047125 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 23:23:07.047272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:23:07.050119 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:23:07.052159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:23:07.054273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 23:23:07.055130 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:23:07.056590 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 23:23:07.056726 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 23:23:07.059865 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 23:23:07.059996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:23:07.062255 systemd[1]: Stopped target paths.target - Path Units. Oct 27 23:23:07.064471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 23:23:07.065337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:23:07.066910 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 23:23:07.068851 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 23:23:07.070789 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 23:23:07.070901 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 23:23:07.072849 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 23:23:07.072936 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 23:23:07.074663 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 23:23:07.074799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:23:07.077037 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 23:23:07.077169 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 23:23:07.090054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 23:23:07.091102 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 23:23:07.091263 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:23:07.097065 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 23:23:07.097993 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 23:23:07.098154 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Oct 27 23:23:07.105072 ignition[994]: INFO : Ignition 2.20.0 Oct 27 23:23:07.105072 ignition[994]: INFO : Stage: umount Oct 27 23:23:07.105072 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:23:07.105072 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:23:07.105072 ignition[994]: INFO : umount: umount passed Oct 27 23:23:07.105072 ignition[994]: INFO : Ignition finished successfully Oct 27 23:23:07.101565 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 23:23:07.101677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 23:23:07.105712 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 23:23:07.107842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 23:23:07.111451 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 23:23:07.113459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 23:23:07.115924 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 23:23:07.116729 systemd[1]: Stopped target network.target - Network. Oct 27 23:23:07.119265 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 23:23:07.119342 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 23:23:07.121458 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 23:23:07.121512 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 23:23:07.123509 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 23:23:07.123560 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 23:23:07.125532 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 23:23:07.125581 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 23:23:07.128124 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 23:23:07.130054 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 23:23:07.140743 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 23:23:07.140931 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 23:23:07.144417 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 27 23:23:07.144673 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 23:23:07.144715 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:23:07.149327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 27 23:23:07.149567 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 23:23:07.149700 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 23:23:07.151803 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 27 23:23:07.152303 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 23:23:07.152368 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:23:07.160925 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 23:23:07.162170 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 23:23:07.162236 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 23:23:07.164375 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Oct 27 23:23:07.164424 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:23:07.168219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 23:23:07.168269 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 23:23:07.170546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 23:23:07.174836 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 27 23:23:07.182878 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 23:23:07.183009 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 23:23:07.191534 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 23:23:07.191653 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 23:23:07.194125 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 23:23:07.194304 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 23:23:07.196698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 23:23:07.196767 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 23:23:07.198033 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 23:23:07.198069 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:23:07.202467 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 23:23:07.202524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 23:23:07.205616 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 23:23:07.205671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 23:23:07.208853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 23:23:07.208906 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:23:07.212085 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 23:23:07.212135 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 23:23:07.223021 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 23:23:07.224189 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 23:23:07.224264 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 23:23:07.227629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 23:23:07.227674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:23:07.231468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 23:23:07.231546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 23:23:07.233947 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 23:23:07.236442 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 23:23:07.246284 systemd[1]: Switching root. Oct 27 23:23:07.276589 systemd-journald[239]: Journal stopped Oct 27 23:23:08.122089 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Oct 27 23:23:08.122146 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 23:23:08.122161 kernel: SELinux: policy capability open_perms=1 Oct 27 23:23:08.122170 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 23:23:08.122183 kernel: SELinux: policy capability always_check_network=0 Oct 27 23:23:08.122200 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 23:23:08.122210 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 23:23:08.122221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 23:23:08.122231 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 23:23:08.122240 kernel: audit: type=1403 audit(1761607387.456:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 23:23:08.122250 systemd[1]: Successfully loaded SELinux policy in 32.962ms. Oct 27 23:23:08.122268 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.781ms. Oct 27 23:23:08.122279 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 23:23:08.122290 systemd[1]: Detected virtualization kvm. Oct 27 23:23:08.122300 systemd[1]: Detected architecture arm64. Oct 27 23:23:08.122310 systemd[1]: Detected first boot. Oct 27 23:23:08.122320 systemd[1]: Initializing machine ID from VM UUID. Oct 27 23:23:08.122330 zram_generator::config[1042]: No configuration found. Oct 27 23:23:08.122341 kernel: NET: Registered PF_VSOCK protocol family Oct 27 23:23:08.122352 systemd[1]: Populated /etc with preset unit settings. Oct 27 23:23:08.122363 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 27 23:23:08.122374 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 23:23:08.122384 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 23:23:08.122394 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 23:23:08.122404 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 23:23:08.122414 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 23:23:08.122424 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 23:23:08.122434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 23:23:08.122447 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 23:23:08.122458 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 23:23:08.122468 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 23:23:08.122479 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 23:23:08.122490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:23:08.122503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:23:08.122513 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 23:23:08.122524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 27 23:23:08.122536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 23:23:08.122546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 23:23:08.122556 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 27 23:23:08.122566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:23:08.122577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 23:23:08.122587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 23:23:08.122597 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 23:23:08.122608 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 23:23:08.122619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:23:08.122630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 23:23:08.122640 systemd[1]: Reached target slices.target - Slice Units. Oct 27 23:23:08.122650 systemd[1]: Reached target swap.target - Swaps. Oct 27 23:23:08.122660 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 23:23:08.122671 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 23:23:08.122681 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 23:23:08.122692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:23:08.122703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 23:23:08.122715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:23:08.122726 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 23:23:08.122736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 23:23:08.122747 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 23:23:08.122756 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 23:23:08.122767 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 23:23:08.122777 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 23:23:08.122787 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 23:23:08.122797 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 23:23:08.122809 systemd[1]: Reached target machines.target - Containers. Oct 27 23:23:08.122819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 23:23:08.122855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:23:08.122865 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 23:23:08.122875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 23:23:08.122885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:23:08.122895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 23:23:08.122905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 27 23:23:08.122917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 23:23:08.122927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:23:08.122938 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 23:23:08.122948 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 23:23:08.122958 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 23:23:08.122968 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 23:23:08.122977 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 23:23:08.122987 kernel: loop: module loaded Oct 27 23:23:08.122998 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:23:08.123010 kernel: fuse: init (API version 7.39) Oct 27 23:23:08.123020 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 23:23:08.123031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 23:23:08.123056 kernel: ACPI: bus type drm_connector registered Oct 27 23:23:08.123066 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 23:23:08.123082 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 23:23:08.123115 systemd-journald[1103]: Collecting audit messages is disabled. Oct 27 23:23:08.123141 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 23:23:08.123153 systemd-journald[1103]: Journal started Oct 27 23:23:08.123173 systemd-journald[1103]: Runtime Journal (/run/log/journal/d1aedafe66794da795034e7d9d07969f) is 5.9M, max 47.3M, 41.4M free. Oct 27 23:23:07.883645 systemd[1]: Queued start job for default target multi-user.target. Oct 27 23:23:07.900183 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 23:23:07.900632 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 23:23:08.130839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 23:23:08.130896 systemd[1]: verity-setup.service: Deactivated successfully. Oct 27 23:23:08.130911 systemd[1]: Stopped verity-setup.service. Oct 27 23:23:08.137857 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 23:23:08.137848 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 23:23:08.139070 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 23:23:08.140383 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 23:23:08.141708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 23:23:08.143104 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 23:23:08.144380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 23:23:08.146843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:23:08.148404 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 23:23:08.148589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 23:23:08.152247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 27 23:23:08.152425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:23:08.153941 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 23:23:08.154120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 23:23:08.156196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:23:08.156369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:23:08.158062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 23:23:08.158255 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 23:23:08.159709 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:23:08.159913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:23:08.161625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 23:23:08.163136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 23:23:08.164775 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 23:23:08.166558 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 23:23:08.179138 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 23:23:08.192005 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 23:23:08.194384 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 23:23:08.195671 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 23:23:08.195718 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 23:23:08.197943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 23:23:08.200522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 23:23:08.202929 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 23:23:08.204130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:23:08.205253 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 23:23:08.209062 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 23:23:08.210453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 23:23:08.211448 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 23:23:08.213047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 23:23:08.217037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:23:08.217340 systemd-journald[1103]: Time spent on flushing to /var/log/journal/d1aedafe66794da795034e7d9d07969f is 14.868ms for 864 entries. Oct 27 23:23:08.217340 systemd-journald[1103]: System Journal (/var/log/journal/d1aedafe66794da795034e7d9d07969f) is 8M, max 195.6M, 187.6M free. Oct 27 23:23:08.324061 systemd-journald[1103]: Received client request to flush runtime journal. 
Oct 27 23:23:08.324117 kernel: loop0: detected capacity change from 0 to 123192 Oct 27 23:23:08.324131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 23:23:08.324142 kernel: loop1: detected capacity change from 0 to 211168 Oct 27 23:23:08.220965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 23:23:08.224888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:23:08.226716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 23:23:08.229286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 23:23:08.231068 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 23:23:08.248070 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 27 23:23:08.250505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:23:08.259709 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 27 23:23:08.310357 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 23:23:08.314846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 23:23:08.328527 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 23:23:08.331231 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 23:23:08.334871 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 23:23:08.340864 kernel: loop2: detected capacity change from 0 to 113512 Oct 27 23:23:08.351302 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 23:23:08.356754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 23:23:08.359710 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 23:23:08.370314 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 23:23:08.382159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 23:23:08.390852 kernel: loop3: detected capacity change from 0 to 123192 Oct 27 23:23:08.396928 kernel: loop4: detected capacity change from 0 to 211168 Oct 27 23:23:08.398371 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Oct 27 23:23:08.398390 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Oct 27 23:23:08.403857 kernel: loop5: detected capacity change from 0 to 113512 Oct 27 23:23:08.403859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 23:23:08.408362 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 27 23:23:08.408767 (sd-merge)[1184]: Merged extensions into '/usr'. Oct 27 23:23:08.416553 systemd[1]: Reload requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 23:23:08.416571 systemd[1]: Reloading... Oct 27 23:23:08.479220 zram_generator::config[1211]: No configuration found. Oct 27 23:23:08.543872 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
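The (sd-merge) messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what prompts the service manager reload that follows. A hedged sketch of how the candidate images could be enumerated from some of the usual sysext search directories (illustrative only; "systemd-sysext status" is the real interface, and the directory list here is not exhaustive):

    from pathlib import Path

    # systemd-sysext looks for extension images in a fixed set of directories;
    # /etc/extensions is where the Ignition-created kubernetes.raw symlink lives.
    # This is only an illustrative directory walk, not how sysext itself is queried.
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_sysext_images():
        images = []
        for d in SYSEXT_DIRS:
            p = Path(d)
            if not p.is_dir():
                continue
            for entry in sorted(p.iterdir()):
                # both plain directories and .raw disk images are accepted
                if entry.is_dir() or entry.suffix == ".raw":
                    images.append(entry)
        return images

    if __name__ == "__main__":
        for img in list_sysext_images():
            print(img)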
Oct 27 23:23:08.583449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:23:08.644929 systemd[1]: Reloading finished in 227 ms. Oct 27 23:23:08.659858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 23:23:08.662457 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 23:23:08.677321 systemd[1]: Starting ensure-sysext.service... Oct 27 23:23:08.679553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 23:23:08.688002 systemd[1]: Reload requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Oct 27 23:23:08.688019 systemd[1]: Reloading... Oct 27 23:23:08.696684 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 23:23:08.697252 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 23:23:08.698011 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 23:23:08.698324 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Oct 27 23:23:08.698449 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Oct 27 23:23:08.701382 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 23:23:08.701524 systemd-tmpfiles[1248]: Skipping /boot Oct 27 23:23:08.710862 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 23:23:08.710991 systemd-tmpfiles[1248]: Skipping /boot Oct 27 23:23:08.746855 zram_generator::config[1280]: No configuration found. Oct 27 23:23:08.836177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:23:08.898263 systemd[1]: Reloading finished in 209 ms. Oct 27 23:23:08.915873 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 23:23:08.936863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:23:08.945142 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 23:23:08.947974 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 23:23:08.950777 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 23:23:08.956185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 23:23:08.960912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 23:23:08.963351 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 23:23:08.967307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:23:08.971176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:23:08.974937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:23:08.979525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
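The docker.socket warning above is only corrected in memory: systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock each time it loads the unit. A minimal sketch of a local drop-in that would make the change persistent (the drop-in name and location are illustrative; the proper fix belongs in the unit shipped with the Docker extension):

    from pathlib import Path

    # Illustrative only: a drop-in that replaces the legacy /var/run path in
    # docker.socket with /run/docker.sock, so systemd no longer has to rewrite
    # it at load time. ListenStream is a list setting, so it must be cleared
    # with an empty assignment before the new value is set.
    DROPIN = Path("/etc/systemd/system/docker.socket.d/10-run-path.conf")
    DROPIN.parent.mkdir(parents=True, exist_ok=True)
    DROPIN.write_text(
        "[Socket]\n"
        "ListenStream=\n"                 # clear the inherited legacy value first
        "ListenStream=/run/docker.sock\n"
    )
    print(f"wrote {DROPIN}; run systemctl daemon-reload and restart docker.socket")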
Oct 27 23:23:08.983005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:23:08.983164 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:23:08.986802 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 23:23:08.993936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:23:08.994177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:23:08.994324 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:23:08.996527 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 23:23:08.999589 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 23:23:08.999709 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Oct 27 23:23:09.003868 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 23:23:09.006357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:23:09.006569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:23:09.008705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:23:09.008899 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:23:09.011277 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:23:09.011478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:23:09.017057 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 23:23:09.018107 augenrules[1345]: No rules Oct 27 23:23:09.019599 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:23:09.019805 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:23:09.022955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 23:23:09.026743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 23:23:09.032843 systemd[1]: Finished ensure-sysext.service. Oct 27 23:23:09.041435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:23:09.046028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:23:09.050940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 23:23:09.058064 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:23:09.063017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:23:09.066009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 27 23:23:09.066063 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:23:09.068387 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 23:23:09.073731 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 23:23:09.076991 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 23:23:09.077394 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 23:23:09.078888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:23:09.079092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:23:09.082984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:23:09.083173 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:23:09.093473 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 27 23:23:09.098513 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 23:23:09.099008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 23:23:09.100784 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:23:09.101317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:23:09.111773 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Oct 27 23:23:09.135289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 23:23:09.150020 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 23:23:09.151406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 23:23:09.151477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 23:23:09.161251 systemd-resolved[1317]: Positive Trust Anchors: Oct 27 23:23:09.161585 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 23:23:09.161674 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 23:23:09.164218 systemd-networkd[1381]: lo: Link UP Oct 27 23:23:09.164229 systemd-networkd[1381]: lo: Gained carrier Oct 27 23:23:09.165445 systemd-networkd[1381]: Enumeration completed Oct 27 23:23:09.166176 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 27 23:23:09.166184 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 23:23:09.166883 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 23:23:09.167099 systemd-networkd[1381]: eth0: Link UP Oct 27 23:23:09.167102 systemd-networkd[1381]: eth0: Gained carrier Oct 27 23:23:09.167116 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:23:09.169143 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:23:09.169204 systemd-resolved[1317]: Defaulting to hostname 'linux'. Oct 27 23:23:09.171263 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 23:23:09.173164 systemd[1]: Reached target network.target - Network. Oct 27 23:23:09.174983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:23:09.183994 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 23:23:09.184124 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:23:09.186652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 23:23:09.189153 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 23:23:09.190699 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 23:23:09.624245 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 23:23:09.624310 systemd-resolved[1317]: Clock change detected. Flushing caches. Oct 27 23:23:09.624324 systemd-timesyncd[1388]: Initial clock synchronization to Mon 2025-10-27 23:23:09.624080 UTC. Oct 27 23:23:09.633310 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 23:23:09.673430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:23:09.683618 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 27 23:23:09.686996 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 27 23:23:09.697893 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 27 23:23:09.713089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:23:09.731783 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 27 23:23:09.733539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:23:09.734801 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:23:09.736130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 23:23:09.737640 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 23:23:09.739246 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 23:23:09.740470 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 23:23:09.741866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
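eth0 is configured from /usr/lib/systemd/network/zz-default.network, Flatcar's default DHCP network unit, and the DHCPv4 lease logged above (10.0.0.25/16, gateway 10.0.0.1) pins down the rest of the addressing. A quick check of what that lease implies, using only the standard library and no values beyond what the log states:

    import ipaddress

    # From the log: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 from server 10.0.0.1
    iface = ipaddress.ip_interface("10.0.0.25/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True, the gateway is on-link
    print(iface.network.num_addresses)      # 65536 addresses in the /16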
Oct 27 23:23:09.743260 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 23:23:09.743308 systemd[1]: Reached target paths.target - Path Units. Oct 27 23:23:09.744242 systemd[1]: Reached target timers.target - Timer Units. Oct 27 23:23:09.746150 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 23:23:09.748982 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 23:23:09.752606 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 23:23:09.754099 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 23:23:09.755465 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 23:23:09.760161 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 23:23:09.762035 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 23:23:09.764687 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 27 23:23:09.766550 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 23:23:09.767903 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 23:23:09.769010 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:23:09.770151 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 23:23:09.770184 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 23:23:09.771228 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 23:23:09.772898 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 27 23:23:09.773603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 23:23:09.776496 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 23:23:09.781515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 23:23:09.783438 jq[1424]: false Oct 27 23:23:09.783938 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 23:23:09.785120 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 23:23:09.787598 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 23:23:09.792965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 27 23:23:09.798371 extend-filesystems[1425]: Found loop3 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found loop4 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found loop5 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda1 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda2 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda3 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found usr Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda4 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda6 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda7 Oct 27 23:23:09.798371 extend-filesystems[1425]: Found vda9 Oct 27 23:23:09.798371 extend-filesystems[1425]: Checking size of /dev/vda9 Oct 27 23:23:09.797637 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 23:23:09.802444 dbus-daemon[1423]: [system] SELinux support is enabled Oct 27 23:23:09.804479 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 23:23:09.806533 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 23:23:09.806996 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 23:23:09.811050 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 23:23:09.813467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 23:23:09.823665 jq[1443]: true Oct 27 23:23:09.816757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 23:23:09.820838 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 27 23:23:09.832113 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1357) Oct 27 23:23:09.832209 extend-filesystems[1425]: Resized partition /dev/vda9 Oct 27 23:23:09.836395 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Oct 27 23:23:09.833641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 23:23:09.833851 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 23:23:09.834141 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 23:23:09.834327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 23:23:09.840762 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 23:23:09.842382 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 27 23:23:09.842391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 23:23:09.853381 jq[1449]: true Oct 27 23:23:09.864634 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 23:23:09.872495 tar[1448]: linux-arm64/LICENSE Oct 27 23:23:09.873330 tar[1448]: linux-arm64/helm Oct 27 23:23:09.872565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 23:23:09.872590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 27 23:23:09.875474 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 23:23:09.877197 update_engine[1440]: I20251027 23:23:09.875447 1440 main.cc:92] Flatcar Update Engine starting Oct 27 23:23:09.875500 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 23:23:09.885461 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 27 23:23:09.885533 update_engine[1440]: I20251027 23:23:09.884601 1440 update_check_scheduler.cc:74] Next update check in 9m26s Oct 27 23:23:09.884479 systemd[1]: Started update-engine.service - Update Engine. Oct 27 23:23:09.895465 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 23:23:09.900122 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 23:23:09.900122 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 23:23:09.900122 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 27 23:23:09.910083 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Oct 27 23:23:09.912597 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Oct 27 23:23:09.907181 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 23:23:09.907398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 23:23:09.912412 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (Power Button) Oct 27 23:23:09.913153 systemd-logind[1437]: New seat seat0. Oct 27 23:23:09.916894 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 23:23:09.918457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 23:23:09.923200 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 23:23:09.956587 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 23:23:10.035863 containerd[1450]: time="2025-10-27T23:23:10.035725321Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Oct 27 23:23:10.060296 containerd[1450]: time="2025-10-27T23:23:10.060228041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.061845 containerd[1450]: time="2025-10-27T23:23:10.061789601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:23:10.061845 containerd[1450]: time="2025-10-27T23:23:10.061827881Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 27 23:23:10.061947 containerd[1450]: time="2025-10-27T23:23:10.061862401Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 27 23:23:10.062097 containerd[1450]: time="2025-10-27T23:23:10.062066281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 27 23:23:10.062097 containerd[1450]: time="2025-10-27T23:23:10.062096521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062210 containerd[1450]: time="2025-10-27T23:23:10.062177081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062210 containerd[1450]: time="2025-10-27T23:23:10.062199081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062465 containerd[1450]: time="2025-10-27T23:23:10.062439801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062465 containerd[1450]: time="2025-10-27T23:23:10.062460921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062514 containerd[1450]: time="2025-10-27T23:23:10.062474241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062514 containerd[1450]: time="2025-10-27T23:23:10.062484801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062591 containerd[1450]: time="2025-10-27T23:23:10.062573361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062815 containerd[1450]: time="2025-10-27T23:23:10.062792121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062940 containerd[1450]: time="2025-10-27T23:23:10.062920761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:23:10.062940 containerd[1450]: time="2025-10-27T23:23:10.062938521Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 27 23:23:10.063034 containerd[1450]: time="2025-10-27T23:23:10.063019641Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 27 23:23:10.063078 containerd[1450]: time="2025-10-27T23:23:10.063064441Z" level=info msg="metadata content store policy set" policy=shared Oct 27 23:23:10.069408 containerd[1450]: time="2025-10-27T23:23:10.069359241Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 27 23:23:10.069502 containerd[1450]: time="2025-10-27T23:23:10.069424801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 27 23:23:10.069502 containerd[1450]: time="2025-10-27T23:23:10.069445361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 27 23:23:10.069502 containerd[1450]: time="2025-10-27T23:23:10.069462841Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 27 23:23:10.069502 containerd[1450]: time="2025-10-27T23:23:10.069482561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 27 23:23:10.069672 containerd[1450]: time="2025-10-27T23:23:10.069652521Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 27 23:23:10.069909 containerd[1450]: time="2025-10-27T23:23:10.069890081Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 27 23:23:10.070007 containerd[1450]: time="2025-10-27T23:23:10.069983201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 27 23:23:10.070007 containerd[1450]: time="2025-10-27T23:23:10.070003201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 27 23:23:10.070054 containerd[1450]: time="2025-10-27T23:23:10.070017441Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 27 23:23:10.070054 containerd[1450]: time="2025-10-27T23:23:10.070033401Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070054 containerd[1450]: time="2025-10-27T23:23:10.070046561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070108 containerd[1450]: time="2025-10-27T23:23:10.070060081Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070108 containerd[1450]: time="2025-10-27T23:23:10.070074441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070108 containerd[1450]: time="2025-10-27T23:23:10.070089761Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070108 containerd[1450]: time="2025-10-27T23:23:10.070103441Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070186 containerd[1450]: time="2025-10-27T23:23:10.070115321Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070186 containerd[1450]: time="2025-10-27T23:23:10.070126521Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 27 23:23:10.070186 containerd[1450]: time="2025-10-27T23:23:10.070148921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070186 containerd[1450]: time="2025-10-27T23:23:10.070172641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070251 containerd[1450]: time="2025-10-27T23:23:10.070202001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070251 containerd[1450]: time="2025-10-27T23:23:10.070214201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070251 containerd[1450]: time="2025-10-27T23:23:10.070225841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070251 containerd[1450]: time="2025-10-27T23:23:10.070238401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070249681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070282441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070295921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070309681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070321761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070342 containerd[1450]: time="2025-10-27T23:23:10.070336441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070437 containerd[1450]: time="2025-10-27T23:23:10.070348761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070437 containerd[1450]: time="2025-10-27T23:23:10.070363401Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 27 23:23:10.070437 containerd[1450]: time="2025-10-27T23:23:10.070384201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070437 containerd[1450]: time="2025-10-27T23:23:10.070396881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070437 containerd[1450]: time="2025-10-27T23:23:10.070407521Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 27 23:23:10.070618 containerd[1450]: time="2025-10-27T23:23:10.070599561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 27 23:23:10.070652 containerd[1450]: time="2025-10-27T23:23:10.070623401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 27 23:23:10.070652 containerd[1450]: time="2025-10-27T23:23:10.070634521Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 27 23:23:10.070652 containerd[1450]: time="2025-10-27T23:23:10.070647241Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 27 23:23:10.070702 containerd[1450]: time="2025-10-27T23:23:10.070657401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 27 23:23:10.070702 containerd[1450]: time="2025-10-27T23:23:10.070670321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 27 23:23:10.070702 containerd[1450]: time="2025-10-27T23:23:10.070680401Z" level=info msg="NRI interface is disabled by configuration." Oct 27 23:23:10.070702 containerd[1450]: time="2025-10-27T23:23:10.070690001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 27 23:23:10.071075 containerd[1450]: time="2025-10-27T23:23:10.071025481Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 27 23:23:10.071075 containerd[1450]: time="2025-10-27T23:23:10.071075641Z" level=info msg="Connect containerd service" Oct 27 23:23:10.071209 containerd[1450]: time="2025-10-27T23:23:10.071109601Z" level=info msg="using legacy CRI server" Oct 27 23:23:10.071209 containerd[1450]: time="2025-10-27T23:23:10.071116281Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 23:23:10.071383 containerd[1450]: time="2025-10-27T23:23:10.071364521Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 27 23:23:10.072137 containerd[1450]: time="2025-10-27T23:23:10.072102441Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 23:23:10.072361 
containerd[1450]: time="2025-10-27T23:23:10.072319401Z" level=info msg="Start subscribing containerd event" Oct 27 23:23:10.072410 containerd[1450]: time="2025-10-27T23:23:10.072395361Z" level=info msg="Start recovering state" Oct 27 23:23:10.072498 containerd[1450]: time="2025-10-27T23:23:10.072479561Z" level=info msg="Start event monitor" Oct 27 23:23:10.072556 containerd[1450]: time="2025-10-27T23:23:10.072539361Z" level=info msg="Start snapshots syncer" Oct 27 23:23:10.072593 containerd[1450]: time="2025-10-27T23:23:10.072560921Z" level=info msg="Start cni network conf syncer for default" Oct 27 23:23:10.072593 containerd[1450]: time="2025-10-27T23:23:10.072570001Z" level=info msg="Start streaming server" Oct 27 23:23:10.072627 containerd[1450]: time="2025-10-27T23:23:10.072608201Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 23:23:10.072667 containerd[1450]: time="2025-10-27T23:23:10.072651681Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 23:23:10.072848 containerd[1450]: time="2025-10-27T23:23:10.072831481Z" level=info msg="containerd successfully booted in 0.039610s" Oct 27 23:23:10.072920 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 23:23:10.266053 tar[1448]: linux-arm64/README.md Oct 27 23:23:10.278421 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 23:23:10.486677 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 23:23:10.505352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 23:23:10.515577 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 23:23:10.521105 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 23:23:10.521352 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 23:23:10.524262 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 23:23:10.536352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 23:23:10.539427 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 23:23:10.541768 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 27 23:23:10.543231 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 23:23:11.423488 systemd-networkd[1381]: eth0: Gained IPv6LL Oct 27 23:23:11.428362 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 23:23:11.430236 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 23:23:11.439574 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 23:23:11.442148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:11.444679 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 23:23:11.459429 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 23:23:11.459652 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 23:23:11.461589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 23:23:11.463806 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 23:23:12.012434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 27 23:23:12.016485 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:23:12.016781 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 23:23:12.024032 systemd[1]: Startup finished in 608ms (kernel) + 5.774s (initrd) + 4.169s (userspace) = 10.551s. Oct 27 23:23:12.387584 kubelet[1537]: E1027 23:23:12.387447 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:23:12.390709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:23:12.390864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:23:12.391327 systemd[1]: kubelet.service: Consumed 751ms CPU time, 259.4M memory peak. Oct 27 23:23:14.703110 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 23:23:14.718607 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:44498.service - OpenSSH per-connection server daemon (10.0.0.1:44498). Oct 27 23:23:14.772180 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 44498 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:14.774065 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:14.786074 systemd-logind[1437]: New session 1 of user core. Oct 27 23:23:14.787127 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 23:23:14.797607 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 23:23:14.807050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 23:23:14.809096 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 23:23:14.816186 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 23:23:14.818468 systemd-logind[1437]: New session c1 of user core. Oct 27 23:23:14.925224 systemd[1555]: Queued start job for default target default.target. Oct 27 23:23:14.934214 systemd[1555]: Created slice app.slice - User Application Slice. Oct 27 23:23:14.934244 systemd[1555]: Reached target paths.target - Paths. Oct 27 23:23:14.934304 systemd[1555]: Reached target timers.target - Timers. Oct 27 23:23:14.935591 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 23:23:14.944595 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 23:23:14.944666 systemd[1555]: Reached target sockets.target - Sockets. Oct 27 23:23:14.944707 systemd[1555]: Reached target basic.target - Basic System. Oct 27 23:23:14.944737 systemd[1555]: Reached target default.target - Main User Target. Oct 27 23:23:14.944765 systemd[1555]: Startup finished in 120ms. Oct 27 23:23:14.944871 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 23:23:14.946537 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 23:23:15.009957 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:44506.service - OpenSSH per-connection server daemon (10.0.0.1:44506). 
Oct 27 23:23:15.050635 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 44506 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.051950 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.055889 systemd-logind[1437]: New session 2 of user core. Oct 27 23:23:15.065461 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 23:23:15.116469 sshd[1568]: Connection closed by 10.0.0.1 port 44506 Oct 27 23:23:15.117398 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:15.130818 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:44506.service: Deactivated successfully. Oct 27 23:23:15.132620 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 23:23:15.133597 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Oct 27 23:23:15.135398 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:44520.service - OpenSSH per-connection server daemon (10.0.0.1:44520). Oct 27 23:23:15.136771 systemd-logind[1437]: Removed session 2. Oct 27 23:23:15.186883 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 44520 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.188198 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.192177 systemd-logind[1437]: New session 3 of user core. Oct 27 23:23:15.199462 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 23:23:15.249535 sshd[1576]: Connection closed by 10.0.0.1 port 44520 Oct 27 23:23:15.249856 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:15.261721 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:44520.service: Deactivated successfully. Oct 27 23:23:15.263447 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 23:23:15.264923 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Oct 27 23:23:15.266155 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532). Oct 27 23:23:15.266999 systemd-logind[1437]: Removed session 3. Oct 27 23:23:15.308790 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 44532 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.310034 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.314373 systemd-logind[1437]: New session 4 of user core. Oct 27 23:23:15.326460 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 23:23:15.379322 sshd[1584]: Connection closed by 10.0.0.1 port 44532 Oct 27 23:23:15.379341 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:15.392018 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:44532.service: Deactivated successfully. Oct 27 23:23:15.393876 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 23:23:15.395391 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Oct 27 23:23:15.396627 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:44536.service - OpenSSH per-connection server daemon (10.0.0.1:44536). Oct 27 23:23:15.397922 systemd-logind[1437]: Removed session 4. 
Oct 27 23:23:15.440562 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 44536 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.441952 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.446341 systemd-logind[1437]: New session 5 of user core. Oct 27 23:23:15.460449 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 23:23:15.516491 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 23:23:15.516786 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:23:15.536230 sudo[1593]: pam_unix(sudo:session): session closed for user root Oct 27 23:23:15.537846 sshd[1592]: Connection closed by 10.0.0.1 port 44536 Oct 27 23:23:15.538250 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:15.551869 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:44536.service: Deactivated successfully. Oct 27 23:23:15.553488 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 23:23:15.555026 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Oct 27 23:23:15.565640 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:44544.service - OpenSSH per-connection server daemon (10.0.0.1:44544). Oct 27 23:23:15.566692 systemd-logind[1437]: Removed session 5. Oct 27 23:23:15.606627 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 44544 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.608013 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.612656 systemd-logind[1437]: New session 6 of user core. Oct 27 23:23:15.622485 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 23:23:15.674902 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 23:23:15.675218 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:23:15.679469 sudo[1603]: pam_unix(sudo:session): session closed for user root Oct 27 23:23:15.689585 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 23:23:15.691562 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:23:15.713741 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 23:23:15.743479 augenrules[1625]: No rules Oct 27 23:23:15.744943 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:23:15.746341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:23:15.748551 sudo[1602]: pam_unix(sudo:session): session closed for user root Oct 27 23:23:15.751974 sshd[1601]: Connection closed by 10.0.0.1 port 44544 Oct 27 23:23:15.752564 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:15.767290 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:44544.service: Deactivated successfully. Oct 27 23:23:15.769571 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 23:23:15.771375 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Oct 27 23:23:15.777660 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:44546.service - OpenSSH per-connection server daemon (10.0.0.1:44546). Oct 27 23:23:15.778797 systemd-logind[1437]: Removed session 6. 
Oct 27 23:23:15.823611 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 44546 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:23:15.825014 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:23:15.829939 systemd-logind[1437]: New session 7 of user core. Oct 27 23:23:15.841501 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 23:23:15.894537 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 23:23:15.894831 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:23:16.220598 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 23:23:16.220687 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 23:23:16.441404 dockerd[1657]: time="2025-10-27T23:23:16.441328521Z" level=info msg="Starting up" Oct 27 23:23:16.659778 dockerd[1657]: time="2025-10-27T23:23:16.659631321Z" level=info msg="Loading containers: start." Oct 27 23:23:16.809042 kernel: Initializing XFRM netlink socket Oct 27 23:23:16.889748 systemd-networkd[1381]: docker0: Link UP Oct 27 23:23:16.924562 dockerd[1657]: time="2025-10-27T23:23:16.924434001Z" level=info msg="Loading containers: done." Oct 27 23:23:16.936489 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3464175657-merged.mount: Deactivated successfully. Oct 27 23:23:16.938369 dockerd[1657]: time="2025-10-27T23:23:16.938323041Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 23:23:16.938448 dockerd[1657]: time="2025-10-27T23:23:16.938433921Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Oct 27 23:23:16.938650 dockerd[1657]: time="2025-10-27T23:23:16.938628521Z" level=info msg="Daemon has completed initialization" Oct 27 23:23:16.967705 dockerd[1657]: time="2025-10-27T23:23:16.967641521Z" level=info msg="API listen on /run/docker.sock" Oct 27 23:23:16.967816 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 23:23:17.648001 containerd[1450]: time="2025-10-27T23:23:17.647963681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 27 23:23:18.308044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112784035.mount: Deactivated successfully. 
Oct 27 23:23:19.287609 containerd[1450]: time="2025-10-27T23:23:19.287546761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:19.288868 containerd[1450]: time="2025-10-27T23:23:19.288578961Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Oct 27 23:23:19.289727 containerd[1450]: time="2025-10-27T23:23:19.289689401Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:19.292931 containerd[1450]: time="2025-10-27T23:23:19.292889961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:19.294123 containerd[1450]: time="2025-10-27T23:23:19.294093561Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.64609144s" Oct 27 23:23:19.294173 containerd[1450]: time="2025-10-27T23:23:19.294134681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 27 23:23:19.295475 containerd[1450]: time="2025-10-27T23:23:19.295386721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 27 23:23:20.932645 containerd[1450]: time="2025-10-27T23:23:20.932561881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:20.933386 containerd[1450]: time="2025-10-27T23:23:20.933342321Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Oct 27 23:23:20.934470 containerd[1450]: time="2025-10-27T23:23:20.934443761Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:20.937673 containerd[1450]: time="2025-10-27T23:23:20.937621881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:20.939445 containerd[1450]: time="2025-10-27T23:23:20.938905321Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.64348364s" Oct 27 23:23:20.939445 containerd[1450]: time="2025-10-27T23:23:20.938941521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 27 23:23:20.939762 
containerd[1450]: time="2025-10-27T23:23:20.939736921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 27 23:23:22.406358 containerd[1450]: time="2025-10-27T23:23:22.405558081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:22.406745 containerd[1450]: time="2025-10-27T23:23:22.406373521Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Oct 27 23:23:22.408515 containerd[1450]: time="2025-10-27T23:23:22.408457601Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:22.411409 containerd[1450]: time="2025-10-27T23:23:22.411355761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:22.412635 containerd[1450]: time="2025-10-27T23:23:22.412587241Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.47281108s" Oct 27 23:23:22.412635 containerd[1450]: time="2025-10-27T23:23:22.412631921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 27 23:23:22.413474 containerd[1450]: time="2025-10-27T23:23:22.413237121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 27 23:23:22.428819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 23:23:22.438568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:22.553404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:22.557885 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:23:22.608484 kubelet[1929]: E1027 23:23:22.608394 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:23:22.611905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:23:22.612065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:23:22.612383 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.5M memory peak. Oct 27 23:23:23.997097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030261763.mount: Deactivated successfully. 
Oct 27 23:23:25.095563 containerd[1450]: time="2025-10-27T23:23:25.095502121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:25.096636 containerd[1450]: time="2025-10-27T23:23:25.096419681Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Oct 27 23:23:25.097633 containerd[1450]: time="2025-10-27T23:23:25.097568001Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:25.100323 containerd[1450]: time="2025-10-27T23:23:25.100121361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:25.100759 containerd[1450]: time="2025-10-27T23:23:25.100732041Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.68746232s" Oct 27 23:23:25.100813 containerd[1450]: time="2025-10-27T23:23:25.100763001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 27 23:23:25.101624 containerd[1450]: time="2025-10-27T23:23:25.101594681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 27 23:23:25.666756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165766356.mount: Deactivated successfully. 
Oct 27 23:23:27.029259 containerd[1450]: time="2025-10-27T23:23:27.029208441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.030379 containerd[1450]: time="2025-10-27T23:23:27.030339001Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Oct 27 23:23:27.030883 containerd[1450]: time="2025-10-27T23:23:27.030857881Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.034497 containerd[1450]: time="2025-10-27T23:23:27.034461601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.035246 containerd[1450]: time="2025-10-27T23:23:27.035209641Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.93357904s" Oct 27 23:23:27.035246 containerd[1450]: time="2025-10-27T23:23:27.035240721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 27 23:23:27.036242 containerd[1450]: time="2025-10-27T23:23:27.035740401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 27 23:23:27.452815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472030382.mount: Deactivated successfully. 
Oct 27 23:23:27.457726 containerd[1450]: time="2025-10-27T23:23:27.457682401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.458671 containerd[1450]: time="2025-10-27T23:23:27.458632561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 27 23:23:27.461029 containerd[1450]: time="2025-10-27T23:23:27.461006521Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.463160 containerd[1450]: time="2025-10-27T23:23:27.463086681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:27.466154 containerd[1450]: time="2025-10-27T23:23:27.466094201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 430.31268ms" Oct 27 23:23:27.466154 containerd[1450]: time="2025-10-27T23:23:27.466137081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 27 23:23:27.466840 containerd[1450]: time="2025-10-27T23:23:27.466818561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 27 23:23:27.907465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636704295.mount: Deactivated successfully. Oct 27 23:23:30.917431 containerd[1450]: time="2025-10-27T23:23:30.917362441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:30.926626 containerd[1450]: time="2025-10-27T23:23:30.926556081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Oct 27 23:23:30.933679 containerd[1450]: time="2025-10-27T23:23:30.933599441Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:30.944675 containerd[1450]: time="2025-10-27T23:23:30.944611321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:30.945954 containerd[1450]: time="2025-10-27T23:23:30.945921121Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.47907464s" Oct 27 23:23:30.946034 containerd[1450]: time="2025-10-27T23:23:30.945959721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 27 23:23:32.679025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 27 23:23:32.690545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:32.786695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:32.790318 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:23:32.821675 kubelet[2090]: E1027 23:23:32.821617 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:23:32.824400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:23:32.824567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:23:32.824875 systemd[1]: kubelet.service: Consumed 124ms CPU time, 107.6M memory peak. Oct 27 23:23:35.723827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:35.723993 systemd[1]: kubelet.service: Consumed 124ms CPU time, 107.6M memory peak. Oct 27 23:23:35.739872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:35.765678 systemd[1]: Reload requested from client PID 2106 ('systemctl') (unit session-7.scope)... Oct 27 23:23:35.765695 systemd[1]: Reloading... Oct 27 23:23:35.852305 zram_generator::config[2156]: No configuration found. Oct 27 23:23:36.188298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:23:36.286697 systemd[1]: Reloading finished in 520 ms. Oct 27 23:23:36.330449 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:36.332567 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 23:23:36.332775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:36.332833 systemd[1]: kubelet.service: Consumed 90ms CPU time, 94.9M memory peak. Oct 27 23:23:36.334409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:36.435694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:36.439246 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 23:23:36.470252 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:23:36.470252 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 23:23:36.470252 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 27 23:23:36.471505 kubelet[2197]: I1027 23:23:36.471441 2197 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 23:23:37.049580 kubelet[2197]: I1027 23:23:37.049534 2197 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 27 23:23:37.049580 kubelet[2197]: I1027 23:23:37.049565 2197 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 23:23:37.049797 kubelet[2197]: I1027 23:23:37.049782 2197 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 23:23:37.069388 kubelet[2197]: E1027 23:23:37.069344 2197 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 23:23:37.070339 kubelet[2197]: I1027 23:23:37.070197 2197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 23:23:37.079424 kubelet[2197]: E1027 23:23:37.079364 2197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 27 23:23:37.079424 kubelet[2197]: I1027 23:23:37.079421 2197 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 27 23:23:37.082295 kubelet[2197]: I1027 23:23:37.082276 2197 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 27 23:23:37.082629 kubelet[2197]: I1027 23:23:37.082587 2197 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 23:23:37.082788 kubelet[2197]: I1027 23:23:37.082617 2197 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 23:23:37.082876 kubelet[2197]: I1027 23:23:37.082846 2197 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 23:23:37.082876 kubelet[2197]: I1027 23:23:37.082855 2197 container_manager_linux.go:303] "Creating device plugin manager" Oct 27 23:23:37.083074 kubelet[2197]: I1027 23:23:37.083044 2197 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:23:37.085554 kubelet[2197]: I1027 23:23:37.085523 2197 kubelet.go:480] "Attempting to sync node with API server" Oct 27 23:23:37.085554 kubelet[2197]: I1027 23:23:37.085547 2197 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 23:23:37.085627 kubelet[2197]: I1027 23:23:37.085584 2197 kubelet.go:386] "Adding apiserver pod source" Oct 27 23:23:37.087505 kubelet[2197]: I1027 23:23:37.086955 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 23:23:37.089183 kubelet[2197]: I1027 23:23:37.088446 2197 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 27 23:23:37.089183 kubelet[2197]: E1027 23:23:37.088924 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 23:23:37.089183 kubelet[2197]: E1027 23:23:37.089127 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 23:23:37.089183 kubelet[2197]: I1027 23:23:37.089143 2197 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 23:23:37.089405 kubelet[2197]: W1027 23:23:37.089255 2197 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 23:23:37.092028 kubelet[2197]: I1027 23:23:37.091996 2197 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 23:23:37.092106 kubelet[2197]: I1027 23:23:37.092047 2197 server.go:1289] "Started kubelet" Oct 27 23:23:37.094518 kubelet[2197]: I1027 23:23:37.093255 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 23:23:37.094518 kubelet[2197]: I1027 23:23:37.093367 2197 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 23:23:37.095259 kubelet[2197]: I1027 23:23:37.095202 2197 server.go:317] "Adding debug handlers to kubelet server" Oct 27 23:23:37.096186 kubelet[2197]: I1027 23:23:37.096164 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 23:23:37.098326 kubelet[2197]: E1027 23:23:37.097359 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18727c9cc9768ca9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 23:23:37.092017321 +0000 UTC m=+0.649629281,LastTimestamp:2025-10-27 23:23:37.092017321 +0000 UTC m=+0.649629281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 23:23:37.098435 kubelet[2197]: E1027 23:23:37.098397 2197 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 23:23:37.098435 kubelet[2197]: I1027 23:23:37.098431 2197 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 23:23:37.098733 kubelet[2197]: I1027 23:23:37.098702 2197 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 23:23:37.098780 kubelet[2197]: I1027 23:23:37.098754 2197 reconciler.go:26] "Reconciler: start to sync state" Oct 27 23:23:37.098780 kubelet[2197]: I1027 23:23:37.096181 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 23:23:37.098958 kubelet[2197]: I1027 23:23:37.098935 2197 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 23:23:37.099293 kubelet[2197]: I1027 23:23:37.099262 2197 factory.go:223] Registration of the systemd container factory successfully Oct 27 23:23:37.099360 kubelet[2197]: E1027 23:23:37.099260 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 23:23:37.099360 kubelet[2197]: I1027 23:23:37.099345 2197 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 23:23:37.099531 kubelet[2197]: E1027 23:23:37.099454 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Oct 27 23:23:37.099985 kubelet[2197]: E1027 23:23:37.099958 2197 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 23:23:37.100650 kubelet[2197]: I1027 23:23:37.100632 2197 factory.go:223] Registration of the containerd container factory successfully Oct 27 23:23:37.112568 kubelet[2197]: I1027 23:23:37.112547 2197 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 23:23:37.112692 kubelet[2197]: I1027 23:23:37.112682 2197 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 23:23:37.112757 kubelet[2197]: I1027 23:23:37.112748 2197 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:23:37.114109 kubelet[2197]: I1027 23:23:37.114076 2197 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 27 23:23:37.115404 kubelet[2197]: I1027 23:23:37.115379 2197 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 27 23:23:37.115528 kubelet[2197]: I1027 23:23:37.115518 2197 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 27 23:23:37.115607 kubelet[2197]: I1027 23:23:37.115597 2197 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 23:23:37.115665 kubelet[2197]: I1027 23:23:37.115657 2197 kubelet.go:2436] "Starting kubelet main sync loop" Oct 27 23:23:37.115772 kubelet[2197]: E1027 23:23:37.115746 2197 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 23:23:37.194039 kubelet[2197]: I1027 23:23:37.193992 2197 policy_none.go:49] "None policy: Start" Oct 27 23:23:37.194039 kubelet[2197]: I1027 23:23:37.194032 2197 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 23:23:37.194039 kubelet[2197]: I1027 23:23:37.194046 2197 state_mem.go:35] "Initializing new in-memory state store" Oct 27 23:23:37.194192 kubelet[2197]: E1027 23:23:37.194120 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 23:23:37.199142 kubelet[2197]: E1027 23:23:37.199122 2197 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 23:23:37.199731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 27 23:23:37.213167 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 23:23:37.215954 kubelet[2197]: E1027 23:23:37.215922 2197 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 27 23:23:37.216020 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 23:23:37.233183 kubelet[2197]: E1027 23:23:37.233156 2197 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 23:23:37.233700 kubelet[2197]: I1027 23:23:37.233474 2197 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 23:23:37.233700 kubelet[2197]: I1027 23:23:37.233511 2197 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 23:23:37.233794 kubelet[2197]: I1027 23:23:37.233744 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 23:23:37.234874 kubelet[2197]: E1027 23:23:37.234740 2197 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 23:23:37.234874 kubelet[2197]: E1027 23:23:37.234784 2197 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 23:23:37.300084 kubelet[2197]: E1027 23:23:37.299954 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Oct 27 23:23:37.336108 kubelet[2197]: I1027 23:23:37.336061 2197 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:23:37.336614 kubelet[2197]: E1027 23:23:37.336519 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Oct 27 23:23:37.428064 systemd[1]: Created slice kubepods-burstable-pod706441d0f6a40403a62d60739622b718.slice - libcontainer container kubepods-burstable-pod706441d0f6a40403a62d60739622b718.slice. Oct 27 23:23:37.445234 kubelet[2197]: E1027 23:23:37.445196 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:37.453150 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 27 23:23:37.457182 kubelet[2197]: E1027 23:23:37.457155 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:37.458964 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Oct 27 23:23:37.461596 kubelet[2197]: E1027 23:23:37.461512 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:37.501367 kubelet[2197]: I1027 23:23:37.501299 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:37.501367 kubelet[2197]: I1027 23:23:37.501338 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:37.501762 kubelet[2197]: I1027 23:23:37.501388 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:37.501762 kubelet[2197]: I1027 23:23:37.501407 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:37.501762 kubelet[2197]: I1027 23:23:37.501422 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:37.501762 kubelet[2197]: I1027 23:23:37.501435 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:37.501762 kubelet[2197]: I1027 23:23:37.501452 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:37.501857 kubelet[2197]: I1027 23:23:37.501468 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:37.501857 kubelet[2197]: I1027 23:23:37.501484 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:37.538064 kubelet[2197]: I1027 23:23:37.537983 2197 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:23:37.538379 kubelet[2197]: E1027 23:23:37.538335 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Oct 27 23:23:37.700712 kubelet[2197]: E1027 23:23:37.700674 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Oct 27 23:23:37.746300 kubelet[2197]: E1027 23:23:37.746247 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:37.747298 containerd[1450]: time="2025-10-27T23:23:37.746994241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:706441d0f6a40403a62d60739622b718,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:37.757947 kubelet[2197]: E1027 23:23:37.757903 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:37.758482 containerd[1450]: time="2025-10-27T23:23:37.758429721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:37.762360 kubelet[2197]: E1027 23:23:37.762105 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:37.762620 containerd[1450]: time="2025-10-27T23:23:37.762580841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:37.893364 kubelet[2197]: E1027 23:23:37.893312 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 23:23:37.940308 kubelet[2197]: I1027 23:23:37.940192 2197 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:23:37.940576 kubelet[2197]: E1027 23:23:37.940551 2197 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Oct 27 23:23:38.076894 kubelet[2197]: E1027 23:23:38.076789 2197 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 23:23:38.233086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162252994.mount: Deactivated successfully. Oct 27 23:23:38.238569 containerd[1450]: time="2025-10-27T23:23:38.238510601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:23:38.239176 containerd[1450]: time="2025-10-27T23:23:38.239082721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 27 23:23:38.241424 containerd[1450]: time="2025-10-27T23:23:38.241386721Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:23:38.243287 containerd[1450]: time="2025-10-27T23:23:38.243114561Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:23:38.244119 containerd[1450]: time="2025-10-27T23:23:38.244057401Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:23:38.244396 containerd[1450]: time="2025-10-27T23:23:38.244247641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 27 23:23:38.245044 containerd[1450]: time="2025-10-27T23:23:38.245002241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 27 23:23:38.246955 containerd[1450]: time="2025-10-27T23:23:38.246736641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.66308ms" Oct 27 23:23:38.247360 containerd[1450]: time="2025-10-27T23:23:38.247327521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:23:38.251924 containerd[1450]: time="2025-10-27T23:23:38.251881401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.36052ms" Oct 27 23:23:38.252663 containerd[1450]: time="2025-10-27T23:23:38.252635761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.98052ms" Oct 27 23:23:38.344444 containerd[1450]: time="2025-10-27T23:23:38.344176881Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:38.344444 containerd[1450]: time="2025-10-27T23:23:38.344249921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:38.344887 containerd[1450]: time="2025-10-27T23:23:38.344700001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:38.344887 containerd[1450]: time="2025-10-27T23:23:38.344775041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:38.344887 containerd[1450]: time="2025-10-27T23:23:38.344790721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.345026 containerd[1450]: time="2025-10-27T23:23:38.344862241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.345125 containerd[1450]: time="2025-10-27T23:23:38.344262281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.345323 containerd[1450]: time="2025-10-27T23:23:38.345242481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.347621 containerd[1450]: time="2025-10-27T23:23:38.346189641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:38.347621 containerd[1450]: time="2025-10-27T23:23:38.346724121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:38.347621 containerd[1450]: time="2025-10-27T23:23:38.346738681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.347621 containerd[1450]: time="2025-10-27T23:23:38.346825281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:38.371493 systemd[1]: Started cri-containerd-ee3a7423e9451e181d16cd22646524b198301cee45b2b8d05ca1af124f878520.scope - libcontainer container ee3a7423e9451e181d16cd22646524b198301cee45b2b8d05ca1af124f878520. Oct 27 23:23:38.376867 systemd[1]: Started cri-containerd-313b014eac199fa2d83126a7e70143ef2fa2a16c5f9ca87240505ab44a8a8d73.scope - libcontainer container 313b014eac199fa2d83126a7e70143ef2fa2a16c5f9ca87240505ab44a8a8d73. Oct 27 23:23:38.379135 systemd[1]: Started cri-containerd-8b83c14974b34404ed5b244017505397ff3be749a65fa5a2d6ef7e9e66809146.scope - libcontainer container 8b83c14974b34404ed5b244017505397ff3be749a65fa5a2d6ef7e9e66809146. 
Oct 27 23:23:38.416444 containerd[1450]: time="2025-10-27T23:23:38.416209121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee3a7423e9451e181d16cd22646524b198301cee45b2b8d05ca1af124f878520\"" Oct 27 23:23:38.418506 containerd[1450]: time="2025-10-27T23:23:38.418468841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:706441d0f6a40403a62d60739622b718,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b83c14974b34404ed5b244017505397ff3be749a65fa5a2d6ef7e9e66809146\"" Oct 27 23:23:38.419098 kubelet[2197]: E1027 23:23:38.419069 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:38.419721 kubelet[2197]: E1027 23:23:38.419697 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:38.424549 containerd[1450]: time="2025-10-27T23:23:38.424510961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"313b014eac199fa2d83126a7e70143ef2fa2a16c5f9ca87240505ab44a8a8d73\"" Oct 27 23:23:38.425821 containerd[1450]: time="2025-10-27T23:23:38.425783281Z" level=info msg="CreateContainer within sandbox \"8b83c14974b34404ed5b244017505397ff3be749a65fa5a2d6ef7e9e66809146\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 23:23:38.426094 kubelet[2197]: E1027 23:23:38.426064 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:38.426988 containerd[1450]: time="2025-10-27T23:23:38.426950521Z" level=info msg="CreateContainer within sandbox \"ee3a7423e9451e181d16cd22646524b198301cee45b2b8d05ca1af124f878520\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 23:23:38.430241 containerd[1450]: time="2025-10-27T23:23:38.429821241Z" level=info msg="CreateContainer within sandbox \"313b014eac199fa2d83126a7e70143ef2fa2a16c5f9ca87240505ab44a8a8d73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 23:23:38.444034 containerd[1450]: time="2025-10-27T23:23:38.443962361Z" level=info msg="CreateContainer within sandbox \"8b83c14974b34404ed5b244017505397ff3be749a65fa5a2d6ef7e9e66809146\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"146b667843349fe7103ab31df25f904041e095677101f8739fd36dfecb2cbe32\"" Oct 27 23:23:38.445146 containerd[1450]: time="2025-10-27T23:23:38.445102921Z" level=info msg="StartContainer for \"146b667843349fe7103ab31df25f904041e095677101f8739fd36dfecb2cbe32\"" Oct 27 23:23:38.448497 containerd[1450]: time="2025-10-27T23:23:38.448456001Z" level=info msg="CreateContainer within sandbox \"ee3a7423e9451e181d16cd22646524b198301cee45b2b8d05ca1af124f878520\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a2d16f9877694cc4a324ed3b66a65d23f69fc2f5d556ade51243898c19acdaa\"" Oct 27 23:23:38.450178 containerd[1450]: time="2025-10-27T23:23:38.449017161Z" level=info msg="StartContainer for \"3a2d16f9877694cc4a324ed3b66a65d23f69fc2f5d556ade51243898c19acdaa\"" Oct 27 
23:23:38.456515 containerd[1450]: time="2025-10-27T23:23:38.455737441Z" level=info msg="CreateContainer within sandbox \"313b014eac199fa2d83126a7e70143ef2fa2a16c5f9ca87240505ab44a8a8d73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"931c905023ced662cf231ad6eb9e066ee01934fcfacf365027106ae21baa6d4d\"" Oct 27 23:23:38.456515 containerd[1450]: time="2025-10-27T23:23:38.456197721Z" level=info msg="StartContainer for \"931c905023ced662cf231ad6eb9e066ee01934fcfacf365027106ae21baa6d4d\"" Oct 27 23:23:38.474452 systemd[1]: Started cri-containerd-146b667843349fe7103ab31df25f904041e095677101f8739fd36dfecb2cbe32.scope - libcontainer container 146b667843349fe7103ab31df25f904041e095677101f8739fd36dfecb2cbe32. Oct 27 23:23:38.478127 systemd[1]: Started cri-containerd-3a2d16f9877694cc4a324ed3b66a65d23f69fc2f5d556ade51243898c19acdaa.scope - libcontainer container 3a2d16f9877694cc4a324ed3b66a65d23f69fc2f5d556ade51243898c19acdaa. Oct 27 23:23:38.484522 systemd[1]: Started cri-containerd-931c905023ced662cf231ad6eb9e066ee01934fcfacf365027106ae21baa6d4d.scope - libcontainer container 931c905023ced662cf231ad6eb9e066ee01934fcfacf365027106ae21baa6d4d. Oct 27 23:23:38.503180 kubelet[2197]: E1027 23:23:38.503140 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" Oct 27 23:23:38.527633 containerd[1450]: time="2025-10-27T23:23:38.527532481Z" level=info msg="StartContainer for \"146b667843349fe7103ab31df25f904041e095677101f8739fd36dfecb2cbe32\" returns successfully" Oct 27 23:23:38.527967 containerd[1450]: time="2025-10-27T23:23:38.527859721Z" level=info msg="StartContainer for \"3a2d16f9877694cc4a324ed3b66a65d23f69fc2f5d556ade51243898c19acdaa\" returns successfully" Oct 27 23:23:38.528345 containerd[1450]: time="2025-10-27T23:23:38.528191361Z" level=info msg="StartContainer for \"931c905023ced662cf231ad6eb9e066ee01934fcfacf365027106ae21baa6d4d\" returns successfully" Oct 27 23:23:38.744866 kubelet[2197]: I1027 23:23:38.744833 2197 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:23:39.125332 kubelet[2197]: E1027 23:23:39.125221 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:39.125458 kubelet[2197]: E1027 23:23:39.125382 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:39.127847 kubelet[2197]: E1027 23:23:39.127822 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:39.127954 kubelet[2197]: E1027 23:23:39.127936 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:39.130163 kubelet[2197]: E1027 23:23:39.130139 2197 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:23:39.130276 kubelet[2197]: E1027 23:23:39.130250 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:40.089368 kubelet[2197]: I1027 23:23:40.089329 2197 apiserver.go:52] "Watching apiserver" Oct 27 23:23:40.090355 kubelet[2197]: I1027 23:23:40.089723 2197 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 23:23:40.100360 kubelet[2197]: I1027 23:23:40.100324 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:40.100360 kubelet[2197]: I1027 23:23:40.100347 2197 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 23:23:40.121049 kubelet[2197]: E1027 23:23:40.120995 2197 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:40.121049 kubelet[2197]: I1027 23:23:40.121035 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:40.125087 kubelet[2197]: E1027 23:23:40.125050 2197 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:40.125087 kubelet[2197]: I1027 23:23:40.125082 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:40.128395 kubelet[2197]: E1027 23:23:40.128368 2197 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:40.130667 kubelet[2197]: I1027 23:23:40.130639 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:40.130908 kubelet[2197]: I1027 23:23:40.130882 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:40.132936 kubelet[2197]: E1027 23:23:40.132893 2197 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:40.133072 kubelet[2197]: E1027 23:23:40.133049 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:40.134240 kubelet[2197]: E1027 23:23:40.134194 2197 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:40.134407 kubelet[2197]: E1027 23:23:40.134384 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:41.520968 kubelet[2197]: I1027 23:23:41.520864 2197 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:41.527748 kubelet[2197]: E1027 23:23:41.527716 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:41.878319 systemd[1]: 
Reload requested from client PID 2488 ('systemctl') (unit session-7.scope)... Oct 27 23:23:41.878334 systemd[1]: Reloading... Oct 27 23:23:41.942334 zram_generator::config[2535]: No configuration found. Oct 27 23:23:42.036974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:23:42.134852 kubelet[2197]: E1027 23:23:42.134738 2197 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:42.138598 systemd[1]: Reloading finished in 259 ms. Oct 27 23:23:42.156770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:42.172773 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 23:23:42.172984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:42.173027 systemd[1]: kubelet.service: Consumed 1.005s CPU time, 129.2M memory peak. Oct 27 23:23:42.183568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:23:42.288668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:23:42.291935 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 23:23:42.328544 kubelet[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:23:42.328544 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 23:23:42.328544 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 27 23:23:42.328884 kubelet[2574]: I1027 23:23:42.328570 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 23:23:42.336300 kubelet[2574]: I1027 23:23:42.335642 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 27 23:23:42.336300 kubelet[2574]: I1027 23:23:42.335668 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 23:23:42.336300 kubelet[2574]: I1027 23:23:42.335874 2574 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 23:23:42.337251 kubelet[2574]: I1027 23:23:42.337215 2574 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 23:23:42.339474 kubelet[2574]: I1027 23:23:42.339441 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 23:23:42.342399 kubelet[2574]: E1027 23:23:42.342358 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 27 23:23:42.342399 kubelet[2574]: I1027 23:23:42.342394 2574 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 27 23:23:42.344854 kubelet[2574]: I1027 23:23:42.344825 2574 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 27 23:23:42.345051 kubelet[2574]: I1027 23:23:42.345018 2574 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 23:23:42.345186 kubelet[2574]: I1027 23:23:42.345041 2574 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 23:23:42.345263 kubelet[2574]: I1027 23:23:42.345188 2574 topology_manager.go:138] "Creating topology 
manager with none policy" Oct 27 23:23:42.345263 kubelet[2574]: I1027 23:23:42.345198 2574 container_manager_linux.go:303] "Creating device plugin manager" Oct 27 23:23:42.345263 kubelet[2574]: I1027 23:23:42.345242 2574 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:23:42.345402 kubelet[2574]: I1027 23:23:42.345390 2574 kubelet.go:480] "Attempting to sync node with API server" Oct 27 23:23:42.345429 kubelet[2574]: I1027 23:23:42.345404 2574 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 23:23:42.345448 kubelet[2574]: I1027 23:23:42.345435 2574 kubelet.go:386] "Adding apiserver pod source" Oct 27 23:23:42.345469 kubelet[2574]: I1027 23:23:42.345449 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 23:23:42.346432 kubelet[2574]: I1027 23:23:42.346413 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 27 23:23:42.349851 kubelet[2574]: I1027 23:23:42.349753 2574 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 23:23:42.355133 kubelet[2574]: I1027 23:23:42.355091 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 23:23:42.355204 kubelet[2574]: I1027 23:23:42.355154 2574 server.go:1289] "Started kubelet" Oct 27 23:23:42.355698 kubelet[2574]: I1027 23:23:42.355651 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 23:23:42.355859 kubelet[2574]: I1027 23:23:42.355812 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 23:23:42.356385 kubelet[2574]: I1027 23:23:42.356054 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 23:23:42.359452 kubelet[2574]: I1027 23:23:42.359416 2574 server.go:317] "Adding debug handlers to kubelet server" Oct 27 23:23:42.360145 kubelet[2574]: I1027 23:23:42.360008 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 23:23:42.366079 kubelet[2574]: I1027 23:23:42.365834 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 23:23:42.366079 kubelet[2574]: I1027 23:23:42.366183 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 23:23:42.366079 kubelet[2574]: I1027 23:23:42.366325 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 23:23:42.366079 kubelet[2574]: I1027 23:23:42.366492 2574 reconciler.go:26] "Reconciler: start to sync state" Oct 27 23:23:42.373195 kubelet[2574]: I1027 23:23:42.369958 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 23:23:42.373195 kubelet[2574]: E1027 23:23:42.371094 2574 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 23:23:42.374650 kubelet[2574]: I1027 23:23:42.374619 2574 factory.go:223] Registration of the containerd container factory successfully Oct 27 23:23:42.374650 kubelet[2574]: I1027 23:23:42.374642 2574 factory.go:223] Registration of the systemd container factory successfully Oct 27 23:23:42.378792 kubelet[2574]: I1027 23:23:42.378765 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 27 23:23:42.380417 kubelet[2574]: I1027 23:23:42.380388 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 27 23:23:42.380417 kubelet[2574]: I1027 23:23:42.380416 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 27 23:23:42.380511 kubelet[2574]: I1027 23:23:42.380435 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 23:23:42.380511 kubelet[2574]: I1027 23:23:42.380442 2574 kubelet.go:2436] "Starting kubelet main sync loop" Oct 27 23:23:42.380511 kubelet[2574]: E1027 23:23:42.380490 2574 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404457 2574 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404478 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404508 2574 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404626 2574 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404636 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404651 2574 policy_none.go:49] "None policy: Start" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404660 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404668 2574 state_mem.go:35] "Initializing new in-memory state store" Oct 27 23:23:42.405559 kubelet[2574]: I1027 23:23:42.404741 2574 state_mem.go:75] "Updated machine memory state" Oct 27 23:23:42.408681 kubelet[2574]: E1027 23:23:42.408649 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 23:23:42.409064 kubelet[2574]: I1027 23:23:42.408807 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 23:23:42.409064 kubelet[2574]: I1027 23:23:42.408825 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 23:23:42.409064 kubelet[2574]: I1027 23:23:42.408960 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 23:23:42.412138 kubelet[2574]: E1027 23:23:42.412098 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 23:23:42.481256 kubelet[2574]: I1027 23:23:42.481224 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:42.481422 kubelet[2574]: I1027 23:23:42.481306 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.481422 kubelet[2574]: I1027 23:23:42.481366 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:42.487382 kubelet[2574]: E1027 23:23:42.487323 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:42.516492 kubelet[2574]: I1027 23:23:42.516461 2574 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:23:42.528843 kubelet[2574]: I1027 23:23:42.528786 2574 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 23:23:42.529003 kubelet[2574]: I1027 23:23:42.528899 2574 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 23:23:42.568280 kubelet[2574]: I1027 23:23:42.568229 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:42.568410 kubelet[2574]: I1027 23:23:42.568288 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:42.568410 kubelet[2574]: I1027 23:23:42.568318 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.568410 kubelet[2574]: I1027 23:23:42.568337 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.568410 kubelet[2574]: I1027 23:23:42.568353 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.568410 kubelet[2574]: I1027 23:23:42.568391 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.568580 kubelet[2574]: I1027 23:23:42.568426 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:23:42.568580 kubelet[2574]: I1027 23:23:42.568470 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:42.568580 kubelet[2574]: I1027 23:23:42.568509 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/706441d0f6a40403a62d60739622b718-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"706441d0f6a40403a62d60739622b718\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:42.786688 kubelet[2574]: E1027 23:23:42.786643 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:42.787829 kubelet[2574]: E1027 23:23:42.787742 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:42.787829 kubelet[2574]: E1027 23:23:42.787818 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:42.880397 sudo[2614]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 27 23:23:42.880706 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 27 23:23:43.320470 sudo[2614]: pam_unix(sudo:session): session closed for user root Oct 27 23:23:43.346912 kubelet[2574]: I1027 23:23:43.346842 2574 apiserver.go:52] "Watching apiserver" Oct 27 23:23:43.366760 kubelet[2574]: I1027 23:23:43.366706 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 23:23:43.391985 kubelet[2574]: E1027 23:23:43.391949 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:43.392629 kubelet[2574]: I1027 23:23:43.392607 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:43.392920 kubelet[2574]: I1027 23:23:43.392901 2574 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:43.401512 kubelet[2574]: E1027 23:23:43.401433 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 23:23:43.401735 kubelet[2574]: E1027 23:23:43.401700 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 27 23:23:43.403999 kubelet[2574]: E1027 23:23:43.403819 2574 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 23:23:43.404218 kubelet[2574]: E1027 23:23:43.404197 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:43.423462 kubelet[2574]: I1027 23:23:43.423380 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.423361983 podStartE2EDuration="1.423361983s" podCreationTimestamp="2025-10-27 23:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:43.423244663 +0000 UTC m=+1.128295040" watchObservedRunningTime="2025-10-27 23:23:43.423361983 +0000 UTC m=+1.128412320" Oct 27 23:23:43.440149 kubelet[2574]: I1027 23:23:43.440072 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.440056258 podStartE2EDuration="2.440056258s" podCreationTimestamp="2025-10-27 23:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:43.439905138 +0000 UTC m=+1.144955475" watchObservedRunningTime="2025-10-27 23:23:43.440056258 +0000 UTC m=+1.145106635" Oct 27 23:23:43.440328 kubelet[2574]: I1027 23:23:43.440176 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.440171017 podStartE2EDuration="1.440171017s" podCreationTimestamp="2025-10-27 23:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:43.43229486 +0000 UTC m=+1.137345237" watchObservedRunningTime="2025-10-27 23:23:43.440171017 +0000 UTC m=+1.145221394" Oct 27 23:23:44.394187 kubelet[2574]: E1027 23:23:44.394150 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:44.394584 kubelet[2574]: E1027 23:23:44.394144 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:44.800413 sudo[1637]: pam_unix(sudo:session): session closed for user root Oct 27 23:23:44.801930 sshd[1636]: Connection closed by 10.0.0.1 port 44546 Oct 27 23:23:44.802645 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Oct 27 23:23:44.805709 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Oct 27 23:23:44.806052 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:44546.service: Deactivated successfully. Oct 27 23:23:44.808707 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 23:23:44.808889 systemd[1]: session-7.scope: Consumed 6.576s CPU time, 257.8M memory peak. Oct 27 23:23:44.810932 systemd-logind[1437]: Removed session 7. 
Oct 27 23:23:45.394334 kubelet[2574]: E1027 23:23:45.394307 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:47.754663 kubelet[2574]: E1027 23:23:47.754596 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:48.890402 kubelet[2574]: I1027 23:23:48.890373 2574 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 23:23:48.896923 containerd[1450]: time="2025-10-27T23:23:48.896476517Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 23:23:48.897257 kubelet[2574]: I1027 23:23:48.897025 2574 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 23:23:49.363125 systemd[1]: Created slice kubepods-burstable-pod0e13702b_8eac_49ae_b850_f40e3278254e.slice - libcontainer container kubepods-burstable-pod0e13702b_8eac_49ae_b850_f40e3278254e.slice. Oct 27 23:23:49.371720 systemd[1]: Created slice kubepods-besteffort-pod9857df72_277e_4865_9555_b6e54bb4ae1b.slice - libcontainer container kubepods-besteffort-pod9857df72_277e_4865_9555_b6e54bb4ae1b.slice. Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414711 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-config-path\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414753 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-proxy\") pod \"kube-proxy-kmqnt\" (UID: \"9857df72-277e-4865-9555-b6e54bb4ae1b\") " pod="kube-system/kube-proxy-kmqnt" Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414775 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdnh\" (UniqueName: \"kubernetes.io/projected/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-api-access-gkdnh\") pod \"kube-proxy-kmqnt\" (UID: \"9857df72-277e-4865-9555-b6e54bb4ae1b\") " pod="kube-system/kube-proxy-kmqnt" Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414795 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-bpf-maps\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414810 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cni-path\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.414877 kubelet[2574]: I1027 23:23:49.414825 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-net\") 
pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414839 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-hubble-tls\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414857 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9857df72-277e-4865-9555-b6e54bb4ae1b-lib-modules\") pod \"kube-proxy-kmqnt\" (UID: \"9857df72-277e-4865-9555-b6e54bb4ae1b\") " pod="kube-system/kube-proxy-kmqnt" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414871 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-run\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414886 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9857df72-277e-4865-9555-b6e54bb4ae1b-xtables-lock\") pod \"kube-proxy-kmqnt\" (UID: \"9857df72-277e-4865-9555-b6e54bb4ae1b\") " pod="kube-system/kube-proxy-kmqnt" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414899 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-hostproc\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415142 kubelet[2574]: I1027 23:23:49.414913 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-cgroup\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415462 kubelet[2574]: I1027 23:23:49.414927 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-etc-cni-netd\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415462 kubelet[2574]: I1027 23:23:49.414951 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-kernel\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.415462 kubelet[2574]: I1027 23:23:49.414965 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfx8\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.419316 kubelet[2574]: I1027 23:23:49.418833 2574 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-lib-modules\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.419316 kubelet[2574]: I1027 23:23:49.418901 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-xtables-lock\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.419316 kubelet[2574]: I1027 23:23:49.418931 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e13702b-8eac-49ae-b850-f40e3278254e-clustermesh-secrets\") pod \"cilium-6ng6r\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " pod="kube-system/cilium-6ng6r" Oct 27 23:23:49.531150 kubelet[2574]: E1027 23:23:49.529807 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 23:23:49.531150 kubelet[2574]: E1027 23:23:49.529840 2574 projected.go:194] Error preparing data for projected volume kube-api-access-gkdnh for pod kube-system/kube-proxy-kmqnt: configmap "kube-root-ca.crt" not found Oct 27 23:23:49.531150 kubelet[2574]: E1027 23:23:49.529930 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-api-access-gkdnh podName:9857df72-277e-4865-9555-b6e54bb4ae1b nodeName:}" failed. No retries permitted until 2025-10-27 23:23:50.029910343 +0000 UTC m=+7.734960720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gkdnh" (UniqueName: "kubernetes.io/projected/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-api-access-gkdnh") pod "kube-proxy-kmqnt" (UID: "9857df72-277e-4865-9555-b6e54bb4ae1b") : configmap "kube-root-ca.crt" not found Oct 27 23:23:49.533909 kubelet[2574]: E1027 23:23:49.533877 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 23:23:49.533909 kubelet[2574]: E1027 23:23:49.533905 2574 projected.go:194] Error preparing data for projected volume kube-api-access-nhfx8 for pod kube-system/cilium-6ng6r: configmap "kube-root-ca.crt" not found Oct 27 23:23:49.534032 kubelet[2574]: E1027 23:23:49.533949 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8 podName:0e13702b-8eac-49ae-b850-f40e3278254e nodeName:}" failed. No retries permitted until 2025-10-27 23:23:50.033933463 +0000 UTC m=+7.738983800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nhfx8" (UniqueName: "kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8") pod "cilium-6ng6r" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e") : configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124638 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124683 2574 projected.go:194] Error preparing data for projected volume kube-api-access-nhfx8 for pod kube-system/cilium-6ng6r: configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124693 2574 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124717 2574 projected.go:194] Error preparing data for projected volume kube-api-access-gkdnh for pod kube-system/kube-proxy-kmqnt: configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124735 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8 podName:0e13702b-8eac-49ae-b850-f40e3278254e nodeName:}" failed. No retries permitted until 2025-10-27 23:23:51.124719101 +0000 UTC m=+8.829769438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nhfx8" (UniqueName: "kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8") pod "cilium-6ng6r" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e") : configmap "kube-root-ca.crt" not found Oct 27 23:23:50.124748 kubelet[2574]: E1027 23:23:50.124760 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-api-access-gkdnh podName:9857df72-277e-4865-9555-b6e54bb4ae1b nodeName:}" failed. No retries permitted until 2025-10-27 23:23:51.124744901 +0000 UTC m=+8.829795278 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gkdnh" (UniqueName: "kubernetes.io/projected/9857df72-277e-4865-9555-b6e54bb4ae1b-kube-api-access-gkdnh") pod "kube-proxy-kmqnt" (UID: "9857df72-277e-4865-9555-b6e54bb4ae1b") : configmap "kube-root-ca.crt" not found Oct 27 23:23:50.177192 systemd[1]: Created slice kubepods-besteffort-pod49fa36a4_22b6_495e_9a92_1f06464d9fc3.slice - libcontainer container kubepods-besteffort-pod49fa36a4_22b6_495e_9a92_1f06464d9fc3.slice. 
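In the MountVolume.SetUp failures above, the wait before the next attempt grows from 500ms after the first failure to 1s after the second while the projected service-account volumes wait for kube-root-ca.crt to exist. A minimal sketch of that kind of doubling backoff with an upper bound; only the 500ms and 1s steps are taken from this log, the starting value and cap used below are assumptions, not the kubelet's actual constants:

package main

import (
    "fmt"
    "time"
)

// nextRetryDelay doubles the previous delay, starting at initial and never
// exceeding max - the same shape as the durationBeforeRetry progression
// (500ms, then 1s) in the mount failures logged above.
func nextRetryDelay(prev, initial, max time.Duration) time.Duration {
    if prev == 0 {
        return initial
    }
    next := prev * 2
    if next > max {
        return max
    }
    return next
}

func main() {
    var delay time.Duration
    for attempt := 1; attempt <= 4; attempt++ {
        delay = nextRetryDelay(delay, 500*time.Millisecond, 2*time.Minute)
        fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
    }
    // attempt 1: retry in 500ms
    // attempt 2: retry in 1s
    // attempt 3: retry in 2s
    // attempt 4: retry in 4s
}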
Oct 27 23:23:50.224787 kubelet[2574]: I1027 23:23:50.224711 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82kp\" (UniqueName: \"kubernetes.io/projected/49fa36a4-22b6-495e-9a92-1f06464d9fc3-kube-api-access-m82kp\") pod \"cilium-operator-6c4d7847fc-bd7cm\" (UID: \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\") " pod="kube-system/cilium-operator-6c4d7847fc-bd7cm" Oct 27 23:23:50.224787 kubelet[2574]: I1027 23:23:50.224759 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49fa36a4-22b6-495e-9a92-1f06464d9fc3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bd7cm\" (UID: \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\") " pod="kube-system/cilium-operator-6c4d7847fc-bd7cm" Oct 27 23:23:50.480958 kubelet[2574]: E1027 23:23:50.480913 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:50.481673 containerd[1450]: time="2025-10-27T23:23:50.481631071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bd7cm,Uid:49fa36a4-22b6-495e-9a92-1f06464d9fc3,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:50.511728 containerd[1450]: time="2025-10-27T23:23:50.511597025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:50.511728 containerd[1450]: time="2025-10-27T23:23:50.511671665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:50.511728 containerd[1450]: time="2025-10-27T23:23:50.511686145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:50.511962 containerd[1450]: time="2025-10-27T23:23:50.511785545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:50.534529 systemd[1]: Started cri-containerd-07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56.scope - libcontainer container 07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56. 
Oct 27 23:23:50.567332 containerd[1450]: time="2025-10-27T23:23:50.567293334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bd7cm,Uid:49fa36a4-22b6-495e-9a92-1f06464d9fc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\"" Oct 27 23:23:50.568790 kubelet[2574]: E1027 23:23:50.568257 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:50.569926 containerd[1450]: time="2025-10-27T23:23:50.569883214Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 27 23:23:51.170138 kubelet[2574]: E1027 23:23:51.170092 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:51.171644 containerd[1450]: time="2025-10-27T23:23:51.171610018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ng6r,Uid:0e13702b-8eac-49ae-b850-f40e3278254e,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:51.184175 kubelet[2574]: E1027 23:23:51.184143 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:51.185185 containerd[1450]: time="2025-10-27T23:23:51.184710256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kmqnt,Uid:9857df72-277e-4865-9555-b6e54bb4ae1b,Namespace:kube-system,Attempt:0,}" Oct 27 23:23:51.194489 containerd[1450]: time="2025-10-27T23:23:51.194327214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:51.194489 containerd[1450]: time="2025-10-27T23:23:51.194444134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:51.194489 containerd[1450]: time="2025-10-27T23:23:51.194460334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:51.195094 containerd[1450]: time="2025-10-27T23:23:51.195002494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:51.217490 containerd[1450]: time="2025-10-27T23:23:51.217344130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:23:51.217597 containerd[1450]: time="2025-10-27T23:23:51.217496250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:23:51.217597 containerd[1450]: time="2025-10-27T23:23:51.217530250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:51.217726 containerd[1450]: time="2025-10-27T23:23:51.217680570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:23:51.220487 systemd[1]: Started cri-containerd-a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a.scope - libcontainer container a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a. Oct 27 23:23:51.241630 systemd[1]: Started cri-containerd-1b5a11eeb32ee423b8b6f9fc3c80cdbb28f6933eea6793ab70204c03e933bc90.scope - libcontainer container 1b5a11eeb32ee423b8b6f9fc3c80cdbb28f6933eea6793ab70204c03e933bc90. Oct 27 23:23:51.252325 containerd[1450]: time="2025-10-27T23:23:51.251816364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ng6r,Uid:0e13702b-8eac-49ae-b850-f40e3278254e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\"" Oct 27 23:23:51.253613 kubelet[2574]: E1027 23:23:51.252527 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:51.270039 containerd[1450]: time="2025-10-27T23:23:51.270002480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kmqnt,Uid:9857df72-277e-4865-9555-b6e54bb4ae1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5a11eeb32ee423b8b6f9fc3c80cdbb28f6933eea6793ab70204c03e933bc90\"" Oct 27 23:23:51.271068 kubelet[2574]: E1027 23:23:51.270820 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:51.278086 containerd[1450]: time="2025-10-27T23:23:51.278031719Z" level=info msg="CreateContainer within sandbox \"1b5a11eeb32ee423b8b6f9fc3c80cdbb28f6933eea6793ab70204c03e933bc90\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 23:23:51.297424 containerd[1450]: time="2025-10-27T23:23:51.297354155Z" level=info msg="CreateContainer within sandbox \"1b5a11eeb32ee423b8b6f9fc3c80cdbb28f6933eea6793ab70204c03e933bc90\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26ac0175ee9a72a5d85d91ee05669eca89fa05a311496fdb4198f7d60834c419\"" Oct 27 23:23:51.300226 containerd[1450]: time="2025-10-27T23:23:51.300194075Z" level=info msg="StartContainer for \"26ac0175ee9a72a5d85d91ee05669eca89fa05a311496fdb4198f7d60834c419\"" Oct 27 23:23:51.329502 systemd[1]: Started cri-containerd-26ac0175ee9a72a5d85d91ee05669eca89fa05a311496fdb4198f7d60834c419.scope - libcontainer container 26ac0175ee9a72a5d85d91ee05669eca89fa05a311496fdb4198f7d60834c419. 
Oct 27 23:23:51.359751 containerd[1450]: time="2025-10-27T23:23:51.359599344Z" level=info msg="StartContainer for \"26ac0175ee9a72a5d85d91ee05669eca89fa05a311496fdb4198f7d60834c419\" returns successfully" Oct 27 23:23:51.407722 kubelet[2574]: E1027 23:23:51.406651 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:53.239688 kubelet[2574]: E1027 23:23:53.239650 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:53.254976 kubelet[2574]: I1027 23:23:53.254898 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kmqnt" podStartSLOduration=4.254880933 podStartE2EDuration="4.254880933s" podCreationTimestamp="2025-10-27 23:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:51.420675173 +0000 UTC m=+9.125725550" watchObservedRunningTime="2025-10-27 23:23:53.254880933 +0000 UTC m=+10.959931310" Oct 27 23:23:53.261089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766785933.mount: Deactivated successfully. Oct 27 23:23:53.415681 kubelet[2574]: E1027 23:23:53.415648 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:54.557931 kubelet[2574]: E1027 23:23:54.557897 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:55.112702 containerd[1450]: time="2025-10-27T23:23:55.112658606Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:55.114084 containerd[1450]: time="2025-10-27T23:23:55.114049966Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 27 23:23:55.114945 containerd[1450]: time="2025-10-27T23:23:55.114905766Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:23:55.116138 containerd[1450]: time="2025-10-27T23:23:55.116108046Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.546182592s" Oct 27 23:23:55.116138 containerd[1450]: time="2025-10-27T23:23:55.116136966Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 27 23:23:55.125186 containerd[1450]: time="2025-10-27T23:23:55.125160164Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 27 23:23:55.127902 containerd[1450]: time="2025-10-27T23:23:55.127870964Z" level=info msg="CreateContainer within sandbox \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 27 23:23:55.153175 containerd[1450]: time="2025-10-27T23:23:55.153133840Z" level=info msg="CreateContainer within sandbox \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\"" Oct 27 23:23:55.153610 containerd[1450]: time="2025-10-27T23:23:55.153587560Z" level=info msg="StartContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\"" Oct 27 23:23:55.177451 systemd[1]: Started cri-containerd-1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96.scope - libcontainer container 1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96. Oct 27 23:23:55.205138 containerd[1450]: time="2025-10-27T23:23:55.205088113Z" level=info msg="StartContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" returns successfully" Oct 27 23:23:55.426404 kubelet[2574]: E1027 23:23:55.426285 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:55.447535 kubelet[2574]: I1027 23:23:55.447407 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bd7cm" podStartSLOduration=0.892274649 podStartE2EDuration="5.44166716s" podCreationTimestamp="2025-10-27 23:23:50 +0000 UTC" firstStartedPulling="2025-10-27 23:23:50.569508414 +0000 UTC m=+8.274558791" lastFinishedPulling="2025-10-27 23:23:55.118900925 +0000 UTC m=+12.823951302" observedRunningTime="2025-10-27 23:23:55.44163896 +0000 UTC m=+13.146689297" watchObservedRunningTime="2025-10-27 23:23:55.44166716 +0000 UTC m=+13.146717577" Oct 27 23:23:55.473328 update_engine[1440]: I20251027 23:23:55.473182 1440 update_attempter.cc:509] Updating boot flags... Oct 27 23:23:55.503302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3015) Oct 27 23:23:55.550614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3018) Oct 27 23:23:55.617301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3018) Oct 27 23:23:56.425360 kubelet[2574]: E1027 23:23:56.425221 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:23:57.763431 kubelet[2574]: E1027 23:23:57.763397 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:09.008600 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:56926.service - OpenSSH per-connection server daemon (10.0.0.1:56926). 
Oct 27 23:24:09.055954 sshd[3030]: Accepted publickey for core from 10.0.0.1 port 56926 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:09.056843 sshd-session[3030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:09.062701 systemd-logind[1437]: New session 8 of user core. Oct 27 23:24:09.067478 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 23:24:09.214964 sshd[3032]: Connection closed by 10.0.0.1 port 56926 Oct 27 23:24:09.215409 sshd-session[3030]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:09.219404 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:56926.service: Deactivated successfully. Oct 27 23:24:09.221612 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 23:24:09.223460 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Oct 27 23:24:09.224897 systemd-logind[1437]: Removed session 8. Oct 27 23:24:09.521536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442376256.mount: Deactivated successfully. Oct 27 23:24:10.718402 containerd[1450]: time="2025-10-27T23:24:10.718319178Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 27 23:24:10.721118 containerd[1450]: time="2025-10-27T23:24:10.721075338Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.595877694s" Oct 27 23:24:10.721118 containerd[1450]: time="2025-10-27T23:24:10.721118938Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 27 23:24:10.734800 containerd[1450]: time="2025-10-27T23:24:10.734626217Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 27 23:24:10.749118 containerd[1450]: time="2025-10-27T23:24:10.749049976Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:24:10.750018 containerd[1450]: time="2025-10-27T23:24:10.749976216Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:24:10.754873 containerd[1450]: time="2025-10-27T23:24:10.754827056Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\"" Oct 27 23:24:10.756253 containerd[1450]: time="2025-10-27T23:24:10.755531736Z" level=info msg="StartContainer for \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\"" Oct 27 23:24:10.785483 systemd[1]: Started cri-containerd-f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5.scope - 
libcontainer container f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5. Oct 27 23:24:10.808003 containerd[1450]: time="2025-10-27T23:24:10.807878973Z" level=info msg="StartContainer for \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\" returns successfully" Oct 27 23:24:10.819404 systemd[1]: cri-containerd-f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5.scope: Deactivated successfully. Oct 27 23:24:11.195412 containerd[1450]: time="2025-10-27T23:24:11.190543193Z" level=info msg="shim disconnected" id=f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5 namespace=k8s.io Oct 27 23:24:11.195412 containerd[1450]: time="2025-10-27T23:24:11.195404913Z" level=warning msg="cleaning up after shim disconnected" id=f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5 namespace=k8s.io Oct 27 23:24:11.195412 containerd[1450]: time="2025-10-27T23:24:11.195419953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:11.453654 kubelet[2574]: E1027 23:24:11.453547 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:11.459133 containerd[1450]: time="2025-10-27T23:24:11.459041259Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 27 23:24:11.480380 containerd[1450]: time="2025-10-27T23:24:11.480310978Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\"" Oct 27 23:24:11.481201 containerd[1450]: time="2025-10-27T23:24:11.480880978Z" level=info msg="StartContainer for \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\"" Oct 27 23:24:11.509556 systemd[1]: Started cri-containerd-41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264.scope - libcontainer container 41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264. Oct 27 23:24:11.531078 containerd[1450]: time="2025-10-27T23:24:11.531008056Z" level=info msg="StartContainer for \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\" returns successfully" Oct 27 23:24:11.543664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 23:24:11.544051 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:24:11.544264 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:24:11.551650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:24:11.551846 systemd[1]: cri-containerd-41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264.scope: Deactivated successfully. Oct 27 23:24:11.563892 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 27 23:24:11.582938 containerd[1450]: time="2025-10-27T23:24:11.582878773Z" level=info msg="shim disconnected" id=41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264 namespace=k8s.io Oct 27 23:24:11.582938 containerd[1450]: time="2025-10-27T23:24:11.582931053Z" level=warning msg="cleaning up after shim disconnected" id=41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264 namespace=k8s.io Oct 27 23:24:11.582938 containerd[1450]: time="2025-10-27T23:24:11.582939533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:11.750708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5-rootfs.mount: Deactivated successfully. Oct 27 23:24:12.457712 kubelet[2574]: E1027 23:24:12.457629 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:12.465987 containerd[1450]: time="2025-10-27T23:24:12.465830050Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 27 23:24:12.486809 containerd[1450]: time="2025-10-27T23:24:12.486711769Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\"" Oct 27 23:24:12.489198 containerd[1450]: time="2025-10-27T23:24:12.487459249Z" level=info msg="StartContainer for \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\"" Oct 27 23:24:12.517517 systemd[1]: Started cri-containerd-066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff.scope - libcontainer container 066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff. Oct 27 23:24:12.544297 containerd[1450]: time="2025-10-27T23:24:12.543837526Z" level=info msg="StartContainer for \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\" returns successfully" Oct 27 23:24:12.547599 systemd[1]: cri-containerd-066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff.scope: Deactivated successfully. Oct 27 23:24:12.575256 containerd[1450]: time="2025-10-27T23:24:12.575195565Z" level=info msg="shim disconnected" id=066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff namespace=k8s.io Oct 27 23:24:12.575256 containerd[1450]: time="2025-10-27T23:24:12.575249365Z" level=warning msg="cleaning up after shim disconnected" id=066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff namespace=k8s.io Oct 27 23:24:12.575256 containerd[1450]: time="2025-10-27T23:24:12.575257685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:12.750225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff-rootfs.mount: Deactivated successfully. 
Oct 27 23:24:13.461794 kubelet[2574]: E1027 23:24:13.461764 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:13.472337 containerd[1450]: time="2025-10-27T23:24:13.472289204Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 27 23:24:13.487646 containerd[1450]: time="2025-10-27T23:24:13.487582723Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\"" Oct 27 23:24:13.488401 containerd[1450]: time="2025-10-27T23:24:13.488367923Z" level=info msg="StartContainer for \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\"" Oct 27 23:24:13.520518 systemd[1]: Started cri-containerd-d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50.scope - libcontainer container d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50. Oct 27 23:24:13.545870 systemd[1]: cri-containerd-d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50.scope: Deactivated successfully. Oct 27 23:24:13.562345 containerd[1450]: time="2025-10-27T23:24:13.562289800Z" level=info msg="StartContainer for \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\" returns successfully" Oct 27 23:24:13.585661 containerd[1450]: time="2025-10-27T23:24:13.585588399Z" level=info msg="shim disconnected" id=d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50 namespace=k8s.io Oct 27 23:24:13.585661 containerd[1450]: time="2025-10-27T23:24:13.585645199Z" level=warning msg="cleaning up after shim disconnected" id=d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50 namespace=k8s.io Oct 27 23:24:13.585661 containerd[1450]: time="2025-10-27T23:24:13.585652919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:13.750527 systemd[1]: run-containerd-runc-k8s.io-d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50-runc.Q3fcZx.mount: Deactivated successfully. Oct 27 23:24:13.750635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50-rootfs.mount: Deactivated successfully. Oct 27 23:24:14.230781 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Oct 27 23:24:14.282795 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:14.284106 sshd-session[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:14.288360 systemd-logind[1437]: New session 9 of user core. Oct 27 23:24:14.300507 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 23:24:14.423688 sshd[3312]: Connection closed by 10.0.0.1 port 60028 Oct 27 23:24:14.424085 sshd-session[3310]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:14.428325 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:60028.service: Deactivated successfully. Oct 27 23:24:14.431356 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 23:24:14.432396 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. 
Oct 27 23:24:14.433444 systemd-logind[1437]: Removed session 9. Oct 27 23:24:14.466038 kubelet[2574]: E1027 23:24:14.465556 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:14.474616 containerd[1450]: time="2025-10-27T23:24:14.474565201Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 27 23:24:14.489937 containerd[1450]: time="2025-10-27T23:24:14.489767280Z" level=info msg="CreateContainer within sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\"" Oct 27 23:24:14.490982 containerd[1450]: time="2025-10-27T23:24:14.490946680Z" level=info msg="StartContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\"" Oct 27 23:24:14.527513 systemd[1]: Started cri-containerd-ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158.scope - libcontainer container ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158. Oct 27 23:24:14.552836 containerd[1450]: time="2025-10-27T23:24:14.552790678Z" level=info msg="StartContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" returns successfully" Oct 27 23:24:14.642493 kubelet[2574]: I1027 23:24:14.642232 2574 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 23:24:14.696233 systemd[1]: Created slice kubepods-burstable-podfb10cfd3_48d4_4fc8_a941_fc3f0b7e1b73.slice - libcontainer container kubepods-burstable-podfb10cfd3_48d4_4fc8_a941_fc3f0b7e1b73.slice. Oct 27 23:24:14.703739 systemd[1]: Created slice kubepods-burstable-pod72bb525d_3d56_440e_80e6_121fe9a16e24.slice - libcontainer container kubepods-burstable-pod72bb525d_3d56_440e_80e6_121fe9a16e24.slice. 
Oct 27 23:24:14.846991 kubelet[2574]: I1027 23:24:14.846858 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tc4w\" (UniqueName: \"kubernetes.io/projected/72bb525d-3d56-440e-80e6-121fe9a16e24-kube-api-access-7tc4w\") pod \"coredns-674b8bbfcf-mvcrr\" (UID: \"72bb525d-3d56-440e-80e6-121fe9a16e24\") " pod="kube-system/coredns-674b8bbfcf-mvcrr" Oct 27 23:24:14.846991 kubelet[2574]: I1027 23:24:14.846912 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73-config-volume\") pod \"coredns-674b8bbfcf-86z8k\" (UID: \"fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73\") " pod="kube-system/coredns-674b8bbfcf-86z8k" Oct 27 23:24:14.846991 kubelet[2574]: I1027 23:24:14.846932 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72bb525d-3d56-440e-80e6-121fe9a16e24-config-volume\") pod \"coredns-674b8bbfcf-mvcrr\" (UID: \"72bb525d-3d56-440e-80e6-121fe9a16e24\") " pod="kube-system/coredns-674b8bbfcf-mvcrr" Oct 27 23:24:14.846991 kubelet[2574]: I1027 23:24:14.846957 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwk5\" (UniqueName: \"kubernetes.io/projected/fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73-kube-api-access-qnwk5\") pod \"coredns-674b8bbfcf-86z8k\" (UID: \"fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73\") " pod="kube-system/coredns-674b8bbfcf-86z8k" Oct 27 23:24:15.001958 kubelet[2574]: E1027 23:24:15.001911 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:15.002757 containerd[1450]: time="2025-10-27T23:24:15.002705699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86z8k,Uid:fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73,Namespace:kube-system,Attempt:0,}" Oct 27 23:24:15.006160 kubelet[2574]: E1027 23:24:15.006116 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:15.008617 containerd[1450]: time="2025-10-27T23:24:15.006803699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mvcrr,Uid:72bb525d-3d56-440e-80e6-121fe9a16e24,Namespace:kube-system,Attempt:0,}" Oct 27 23:24:15.469887 kubelet[2574]: E1027 23:24:15.469826 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:15.486959 kubelet[2574]: I1027 23:24:15.486867 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6ng6r" podStartSLOduration=7.018474385 podStartE2EDuration="26.48685136s" podCreationTimestamp="2025-10-27 23:23:49 +0000 UTC" firstStartedPulling="2025-10-27 23:23:51.253500643 +0000 UTC m=+8.958551020" lastFinishedPulling="2025-10-27 23:24:10.721877658 +0000 UTC m=+28.426927995" observedRunningTime="2025-10-27 23:24:15.48637756 +0000 UTC m=+33.191427937" watchObservedRunningTime="2025-10-27 23:24:15.48685136 +0000 UTC m=+33.191901697" Oct 27 23:24:16.472630 kubelet[2574]: E1027 23:24:16.472079 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:16.622596 systemd-networkd[1381]: cilium_host: Link UP Oct 27 23:24:16.623664 systemd-networkd[1381]: cilium_net: Link UP Oct 27 23:24:16.623745 systemd-networkd[1381]: cilium_net: Gained carrier Oct 27 23:24:16.624011 systemd-networkd[1381]: cilium_host: Gained carrier Oct 27 23:24:16.722534 systemd-networkd[1381]: cilium_vxlan: Link UP Oct 27 23:24:16.722544 systemd-networkd[1381]: cilium_vxlan: Gained carrier Oct 27 23:24:16.928426 systemd-networkd[1381]: cilium_host: Gained IPv6LL Oct 27 23:24:16.985294 kernel: NET: Registered PF_ALG protocol family Oct 27 23:24:17.407407 systemd-networkd[1381]: cilium_net: Gained IPv6LL Oct 27 23:24:17.473870 kubelet[2574]: E1027 23:24:17.473836 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:17.614651 systemd-networkd[1381]: lxc_health: Link UP Oct 27 23:24:17.625459 systemd-networkd[1381]: lxc_health: Gained carrier Oct 27 23:24:18.059169 systemd-networkd[1381]: lxc5e45fe8a0748: Link UP Oct 27 23:24:18.075310 kernel: eth0: renamed from tmp1fa75 Oct 27 23:24:18.096110 systemd-networkd[1381]: lxcc80aa7d2144b: Link UP Oct 27 23:24:18.097718 systemd-networkd[1381]: lxc5e45fe8a0748: Gained carrier Oct 27 23:24:18.100295 kernel: eth0: renamed from tmp7606c Oct 27 23:24:18.109852 systemd-networkd[1381]: lxcc80aa7d2144b: Gained carrier Oct 27 23:24:18.495471 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL Oct 27 23:24:19.186230 kubelet[2574]: E1027 23:24:19.186178 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:19.452853 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:32894.service - OpenSSH per-connection server daemon (10.0.0.1:32894). Oct 27 23:24:19.455488 systemd-networkd[1381]: lxc_health: Gained IPv6LL Oct 27 23:24:19.476560 kubelet[2574]: E1027 23:24:19.476357 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:19.517352 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:19.519249 sshd-session[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:19.524130 systemd-logind[1437]: New session 10 of user core. Oct 27 23:24:19.536470 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 23:24:19.584420 systemd-networkd[1381]: lxcc80aa7d2144b: Gained IPv6LL Oct 27 23:24:19.670905 sshd[3861]: Connection closed by 10.0.0.1 port 32894 Oct 27 23:24:19.671484 sshd-session[3859]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:19.675680 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Oct 27 23:24:19.675898 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:32894.service: Deactivated successfully. Oct 27 23:24:19.678098 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 23:24:19.681067 systemd-logind[1437]: Removed session 10. 
Oct 27 23:24:20.095461 systemd-networkd[1381]: lxc5e45fe8a0748: Gained IPv6LL Oct 27 23:24:20.477966 kubelet[2574]: E1027 23:24:20.477907 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:21.751880 containerd[1450]: time="2025-10-27T23:24:21.751778679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:24:21.751880 containerd[1450]: time="2025-10-27T23:24:21.751859239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:24:21.751880 containerd[1450]: time="2025-10-27T23:24:21.751871999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:21.752339 containerd[1450]: time="2025-10-27T23:24:21.751947959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:21.758472 containerd[1450]: time="2025-10-27T23:24:21.758201279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:24:21.758472 containerd[1450]: time="2025-10-27T23:24:21.758293879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:24:21.758472 containerd[1450]: time="2025-10-27T23:24:21.758311039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:21.758976 containerd[1450]: time="2025-10-27T23:24:21.758858239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:21.781506 systemd[1]: Started cri-containerd-1fa750ccead417e55d368552ad5aa91591d4df08827c09c78a5646232785a058.scope - libcontainer container 1fa750ccead417e55d368552ad5aa91591d4df08827c09c78a5646232785a058. Oct 27 23:24:21.785366 systemd[1]: Started cri-containerd-7606ccdb0d7176b22dd3545f708ddce6de5f79fcf5bcc43f5b344db6db6eff39.scope - libcontainer container 7606ccdb0d7176b22dd3545f708ddce6de5f79fcf5bcc43f5b344db6db6eff39. 
Oct 27 23:24:21.793639 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:24:21.798654 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:24:21.813656 containerd[1450]: time="2025-10-27T23:24:21.813612597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mvcrr,Uid:72bb525d-3d56-440e-80e6-121fe9a16e24,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa750ccead417e55d368552ad5aa91591d4df08827c09c78a5646232785a058\"" Oct 27 23:24:21.814519 kubelet[2574]: E1027 23:24:21.814491 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:21.820082 containerd[1450]: time="2025-10-27T23:24:21.819939837Z" level=info msg="CreateContainer within sandbox \"1fa750ccead417e55d368552ad5aa91591d4df08827c09c78a5646232785a058\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:24:21.820548 containerd[1450]: time="2025-10-27T23:24:21.820521997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86z8k,Uid:fb10cfd3-48d4-4fc8-a941-fc3f0b7e1b73,Namespace:kube-system,Attempt:0,} returns sandbox id \"7606ccdb0d7176b22dd3545f708ddce6de5f79fcf5bcc43f5b344db6db6eff39\"" Oct 27 23:24:21.821298 kubelet[2574]: E1027 23:24:21.821072 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:21.825827 containerd[1450]: time="2025-10-27T23:24:21.825738637Z" level=info msg="CreateContainer within sandbox \"7606ccdb0d7176b22dd3545f708ddce6de5f79fcf5bcc43f5b344db6db6eff39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:24:21.840177 containerd[1450]: time="2025-10-27T23:24:21.840129037Z" level=info msg="CreateContainer within sandbox \"1fa750ccead417e55d368552ad5aa91591d4df08827c09c78a5646232785a058\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"212cf33c39e1153236233b0d2679411310d7a71cff71a19fc9a37f63eee60a5c\"" Oct 27 23:24:21.840981 containerd[1450]: time="2025-10-27T23:24:21.840943877Z" level=info msg="StartContainer for \"212cf33c39e1153236233b0d2679411310d7a71cff71a19fc9a37f63eee60a5c\"" Oct 27 23:24:21.851462 containerd[1450]: time="2025-10-27T23:24:21.850918516Z" level=info msg="CreateContainer within sandbox \"7606ccdb0d7176b22dd3545f708ddce6de5f79fcf5bcc43f5b344db6db6eff39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"342881142e7bc9344f17ab8a5ca1986696d33cf37fbfe541e9b451346288b280\"" Oct 27 23:24:21.852904 containerd[1450]: time="2025-10-27T23:24:21.852222076Z" level=info msg="StartContainer for \"342881142e7bc9344f17ab8a5ca1986696d33cf37fbfe541e9b451346288b280\"" Oct 27 23:24:21.883559 systemd[1]: Started cri-containerd-212cf33c39e1153236233b0d2679411310d7a71cff71a19fc9a37f63eee60a5c.scope - libcontainer container 212cf33c39e1153236233b0d2679411310d7a71cff71a19fc9a37f63eee60a5c. Oct 27 23:24:21.887087 systemd[1]: Started cri-containerd-342881142e7bc9344f17ab8a5ca1986696d33cf37fbfe541e9b451346288b280.scope - libcontainer container 342881142e7bc9344f17ab8a5ca1986696d33cf37fbfe541e9b451346288b280. 
Oct 27 23:24:21.913529 containerd[1450]: time="2025-10-27T23:24:21.913466915Z" level=info msg="StartContainer for \"212cf33c39e1153236233b0d2679411310d7a71cff71a19fc9a37f63eee60a5c\" returns successfully" Oct 27 23:24:21.918831 containerd[1450]: time="2025-10-27T23:24:21.918787154Z" level=info msg="StartContainer for \"342881142e7bc9344f17ab8a5ca1986696d33cf37fbfe541e9b451346288b280\" returns successfully" Oct 27 23:24:22.482171 kubelet[2574]: E1027 23:24:22.482130 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:22.487432 kubelet[2574]: E1027 23:24:22.487160 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:22.493734 kubelet[2574]: I1027 23:24:22.493450 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mvcrr" podStartSLOduration=32.4934343 podStartE2EDuration="32.4934343s" podCreationTimestamp="2025-10-27 23:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:22.49294182 +0000 UTC m=+40.197992197" watchObservedRunningTime="2025-10-27 23:24:22.4934343 +0000 UTC m=+40.198484677" Oct 27 23:24:22.505330 kubelet[2574]: I1027 23:24:22.505192 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-86z8k" podStartSLOduration=32.50517622 podStartE2EDuration="32.50517622s" podCreationTimestamp="2025-10-27 23:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:22.50471486 +0000 UTC m=+40.209765197" watchObservedRunningTime="2025-10-27 23:24:22.50517622 +0000 UTC m=+40.210226557" Oct 27 23:24:23.488428 kubelet[2574]: E1027 23:24:23.488159 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:23.488873 kubelet[2574]: E1027 23:24:23.488823 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:24.489885 kubelet[2574]: E1027 23:24:24.489726 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:24.489885 kubelet[2574]: E1027 23:24:24.489817 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:24.691069 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:32908.service - OpenSSH per-connection server daemon (10.0.0.1:32908). Oct 27 23:24:24.762081 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 32908 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:24.766606 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:24.777225 systemd-logind[1437]: New session 11 of user core. Oct 27 23:24:24.786486 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 27 23:24:24.922174 sshd[4057]: Connection closed by 10.0.0.1 port 32908 Oct 27 23:24:24.922769 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:24.932753 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:32908.service: Deactivated successfully. Oct 27 23:24:24.934777 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 23:24:24.936234 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Oct 27 23:24:24.941565 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:32912.service - OpenSSH per-connection server daemon (10.0.0.1:32912). Oct 27 23:24:24.943726 systemd-logind[1437]: Removed session 11. Oct 27 23:24:24.982807 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 32912 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:24.984143 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:24.989970 systemd-logind[1437]: New session 12 of user core. Oct 27 23:24:24.999464 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 23:24:25.160612 sshd[4077]: Connection closed by 10.0.0.1 port 32912 Oct 27 23:24:25.160891 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:25.170741 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:32912.service: Deactivated successfully. Oct 27 23:24:25.172897 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 23:24:25.173775 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Oct 27 23:24:25.185280 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:32914.service - OpenSSH per-connection server daemon (10.0.0.1:32914). Oct 27 23:24:25.186807 systemd-logind[1437]: Removed session 12. Oct 27 23:24:25.236407 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 32914 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:25.237730 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:25.244143 systemd-logind[1437]: New session 13 of user core. Oct 27 23:24:25.253447 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 23:24:25.369618 sshd[4090]: Connection closed by 10.0.0.1 port 32914 Oct 27 23:24:25.369949 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:25.373090 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:32914.service: Deactivated successfully. Oct 27 23:24:25.375096 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 23:24:25.379927 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Oct 27 23:24:25.381016 systemd-logind[1437]: Removed session 13. Oct 27 23:24:30.391825 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:58230.service - OpenSSH per-connection server daemon (10.0.0.1:58230). Oct 27 23:24:30.436973 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 58230 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:30.438479 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:30.445258 systemd-logind[1437]: New session 14 of user core. Oct 27 23:24:30.458792 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 23:24:30.577905 sshd[4106]: Connection closed by 10.0.0.1 port 58230 Oct 27 23:24:30.578263 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:30.581683 systemd-logind[1437]: Session 14 logged out. 
Waiting for processes to exit. Oct 27 23:24:30.581926 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:58230.service: Deactivated successfully. Oct 27 23:24:30.584698 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 23:24:30.587462 systemd-logind[1437]: Removed session 14. Oct 27 23:24:35.595425 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246). Oct 27 23:24:35.649505 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:35.650907 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:35.658222 systemd-logind[1437]: New session 15 of user core. Oct 27 23:24:35.664494 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 23:24:35.812094 sshd[4121]: Connection closed by 10.0.0.1 port 58246 Oct 27 23:24:35.812910 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:35.825667 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:58246.service: Deactivated successfully. Oct 27 23:24:35.828695 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 23:24:35.829869 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Oct 27 23:24:35.849707 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256). Oct 27 23:24:35.850845 systemd-logind[1437]: Removed session 15. Oct 27 23:24:35.896125 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:35.897599 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:35.901910 systemd-logind[1437]: New session 16 of user core. Oct 27 23:24:35.912497 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 23:24:36.134115 sshd[4136]: Connection closed by 10.0.0.1 port 58256 Oct 27 23:24:36.134717 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:36.147747 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:58256.service: Deactivated successfully. Oct 27 23:24:36.150378 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 23:24:36.153763 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Oct 27 23:24:36.169699 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:58258.service - OpenSSH per-connection server daemon (10.0.0.1:58258). Oct 27 23:24:36.171104 systemd-logind[1437]: Removed session 16. Oct 27 23:24:36.215495 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 58258 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:36.216983 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:36.221405 systemd-logind[1437]: New session 17 of user core. Oct 27 23:24:36.231478 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 23:24:36.928487 sshd[4151]: Connection closed by 10.0.0.1 port 58258 Oct 27 23:24:36.929427 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:36.940518 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:58258.service: Deactivated successfully. Oct 27 23:24:36.943006 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 23:24:36.944823 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. 
Oct 27 23:24:36.954870 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:58260.service - OpenSSH per-connection server daemon (10.0.0.1:58260). Oct 27 23:24:36.957428 systemd-logind[1437]: Removed session 17. Oct 27 23:24:37.000707 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 58260 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:37.002103 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:37.007193 systemd-logind[1437]: New session 18 of user core. Oct 27 23:24:37.017526 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 23:24:37.297394 sshd[4174]: Connection closed by 10.0.0.1 port 58260 Oct 27 23:24:37.298550 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:37.309044 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:58260.service: Deactivated successfully. Oct 27 23:24:37.312314 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 23:24:37.314850 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Oct 27 23:24:37.327770 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:58270.service - OpenSSH per-connection server daemon (10.0.0.1:58270). Oct 27 23:24:37.329344 systemd-logind[1437]: Removed session 18. Oct 27 23:24:37.369300 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 58270 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:37.371035 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:37.378189 systemd-logind[1437]: New session 19 of user core. Oct 27 23:24:37.385495 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 23:24:37.502288 sshd[4188]: Connection closed by 10.0.0.1 port 58270 Oct 27 23:24:37.502690 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:37.506459 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. Oct 27 23:24:37.506794 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:58270.service: Deactivated successfully. Oct 27 23:24:37.508989 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 23:24:37.510056 systemd-logind[1437]: Removed session 19. Oct 27 23:24:42.514236 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:35976.service - OpenSSH per-connection server daemon (10.0.0.1:35976). Oct 27 23:24:42.564665 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 35976 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:42.566165 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:42.571995 systemd-logind[1437]: New session 20 of user core. Oct 27 23:24:42.586540 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 23:24:42.697811 sshd[4208]: Connection closed by 10.0.0.1 port 35976 Oct 27 23:24:42.698199 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:42.701557 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:35976.service: Deactivated successfully. Oct 27 23:24:42.703562 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 23:24:42.705933 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Oct 27 23:24:42.707029 systemd-logind[1437]: Removed session 20. Oct 27 23:24:47.710000 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:35982.service - OpenSSH per-connection server daemon (10.0.0.1:35982). 
Oct 27 23:24:47.762121 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 35982 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:47.763491 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:47.768485 systemd-logind[1437]: New session 21 of user core. Oct 27 23:24:47.781524 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 23:24:47.914487 sshd[4223]: Connection closed by 10.0.0.1 port 35982 Oct 27 23:24:47.915000 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:47.918199 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:35982.service: Deactivated successfully. Oct 27 23:24:47.919895 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 23:24:47.921381 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Oct 27 23:24:47.922574 systemd-logind[1437]: Removed session 21. Oct 27 23:24:52.930207 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004). Oct 27 23:24:52.984410 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:52.985778 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:52.990334 systemd-logind[1437]: New session 22 of user core. Oct 27 23:24:52.996537 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 23:24:53.127584 sshd[4243]: Connection closed by 10.0.0.1 port 59004 Oct 27 23:24:53.128099 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:53.141827 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:59004.service: Deactivated successfully. Oct 27 23:24:53.143830 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 23:24:53.144650 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Oct 27 23:24:53.163654 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:59018.service - OpenSSH per-connection server daemon (10.0.0.1:59018). Oct 27 23:24:53.164767 systemd-logind[1437]: Removed session 22. Oct 27 23:24:53.202600 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 59018 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:53.203897 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:53.208166 systemd-logind[1437]: New session 23 of user core. Oct 27 23:24:53.215481 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 23:24:54.742590 containerd[1450]: time="2025-10-27T23:24:54.742546848Z" level=info msg="StopContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" with timeout 30 (s)" Oct 27 23:24:54.743387 containerd[1450]: time="2025-10-27T23:24:54.743356170Z" level=info msg="Stop container \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" with signal terminated" Oct 27 23:24:54.767445 systemd[1]: cri-containerd-1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96.scope: Deactivated successfully. 
Oct 27 23:24:54.785475 containerd[1450]: time="2025-10-27T23:24:54.785342395Z" level=info msg="StopContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" with timeout 2 (s)" Oct 27 23:24:54.786382 containerd[1450]: time="2025-10-27T23:24:54.785671756Z" level=info msg="Stop container \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" with signal terminated" Oct 27 23:24:54.788142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96-rootfs.mount: Deactivated successfully. Oct 27 23:24:54.788639 containerd[1450]: time="2025-10-27T23:24:54.788601883Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 23:24:54.792505 systemd-networkd[1381]: lxc_health: Link DOWN Oct 27 23:24:54.792513 systemd-networkd[1381]: lxc_health: Lost carrier Oct 27 23:24:54.794420 containerd[1450]: time="2025-10-27T23:24:54.794305098Z" level=info msg="shim disconnected" id=1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96 namespace=k8s.io Oct 27 23:24:54.794420 containerd[1450]: time="2025-10-27T23:24:54.794405098Z" level=warning msg="cleaning up after shim disconnected" id=1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96 namespace=k8s.io Oct 27 23:24:54.794420 containerd[1450]: time="2025-10-27T23:24:54.794415338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:54.809932 systemd[1]: cri-containerd-ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158.scope: Deactivated successfully. Oct 27 23:24:54.810621 systemd[1]: cri-containerd-ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158.scope: Consumed 6.468s CPU time, 123.8M memory peak, 156K read from disk, 12.9M written to disk. Oct 27 23:24:54.842698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158-rootfs.mount: Deactivated successfully. 
Oct 27 23:24:54.849281 containerd[1450]: time="2025-10-27T23:24:54.849198235Z" level=info msg="shim disconnected" id=ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158 namespace=k8s.io Oct 27 23:24:54.849281 containerd[1450]: time="2025-10-27T23:24:54.849253635Z" level=warning msg="cleaning up after shim disconnected" id=ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158 namespace=k8s.io Oct 27 23:24:54.849281 containerd[1450]: time="2025-10-27T23:24:54.849263235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:54.855879 containerd[1450]: time="2025-10-27T23:24:54.855838052Z" level=info msg="StopContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" returns successfully" Oct 27 23:24:54.856629 containerd[1450]: time="2025-10-27T23:24:54.856595414Z" level=info msg="StopPodSandbox for \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\"" Oct 27 23:24:54.857063 containerd[1450]: time="2025-10-27T23:24:54.856644934Z" level=info msg="Container to stop \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.858779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56-shm.mount: Deactivated successfully. Oct 27 23:24:54.865225 systemd[1]: cri-containerd-07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56.scope: Deactivated successfully. Oct 27 23:24:54.867856 containerd[1450]: time="2025-10-27T23:24:54.867581881Z" level=info msg="StopContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" returns successfully" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868137603Z" level=info msg="StopPodSandbox for \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\"" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868172363Z" level=info msg="Container to stop \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868183923Z" level=info msg="Container to stop \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868192043Z" level=info msg="Container to stop \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868209803Z" level=info msg="Container to stop \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.868376 containerd[1450]: time="2025-10-27T23:24:54.868218763Z" level=info msg="Container to stop \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:24:54.870739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a-shm.mount: Deactivated successfully. Oct 27 23:24:54.884641 systemd[1]: cri-containerd-a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a.scope: Deactivated successfully. 
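Every containerd entry in this journal carries structured fields after the syslog prefix: a quoted time, a level, a quoted msg (with \" escapes inside), and for shim events an id and namespace=k8s.io. A minimal Python sketch for splitting one such entry into its key=value fields; it is illustrative only, leaves escape sequences inside quoted values untouched, and abbreviates container ids in the sample:

```python
import re

# A containerd entry in this journal looks like:
#   containerd[1450]: time="2025-10-27T23:24:54.794305098Z" level=info msg="shim disconnected" id=1cac... namespace=k8s.io
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def containerd_fields(entry):
    """Split one containerd entry into key=value fields (quotes stripped, escapes left as-is)."""
    out = {}
    for key, value in FIELD.findall(entry):
        out[key] = value[1:-1] if value.startswith('"') else value
    return out

if __name__ == "__main__":
    entry = ('time="2025-10-27T23:24:54.794305098Z" level=info '
             'msg="shim disconnected" id=1cac997c4736 namespace=k8s.io')
    f = containerd_fields(entry)
    print(f["level"], "|", f["msg"], "|", f["id"])
    # info | shim disconnected | 1cac997c4736
```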
Oct 27 23:24:54.901054 containerd[1450]: time="2025-10-27T23:24:54.900841604Z" level=info msg="shim disconnected" id=07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56 namespace=k8s.io Oct 27 23:24:54.901054 containerd[1450]: time="2025-10-27T23:24:54.900902365Z" level=warning msg="cleaning up after shim disconnected" id=07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56 namespace=k8s.io Oct 27 23:24:54.901054 containerd[1450]: time="2025-10-27T23:24:54.900913245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:54.902520 containerd[1450]: time="2025-10-27T23:24:54.902465488Z" level=info msg="shim disconnected" id=a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a namespace=k8s.io Oct 27 23:24:54.902619 containerd[1450]: time="2025-10-27T23:24:54.902516169Z" level=warning msg="cleaning up after shim disconnected" id=a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a namespace=k8s.io Oct 27 23:24:54.902651 containerd[1450]: time="2025-10-27T23:24:54.902618609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:54.915878 containerd[1450]: time="2025-10-27T23:24:54.915836762Z" level=info msg="TearDown network for sandbox \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" successfully" Oct 27 23:24:54.915878 containerd[1450]: time="2025-10-27T23:24:54.915869362Z" level=info msg="StopPodSandbox for \"a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a\" returns successfully" Oct 27 23:24:54.917451 containerd[1450]: time="2025-10-27T23:24:54.917308806Z" level=info msg="TearDown network for sandbox \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\" successfully" Oct 27 23:24:54.917451 containerd[1450]: time="2025-10-27T23:24:54.917328806Z" level=info msg="StopPodSandbox for \"07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56\" returns successfully" Oct 27 23:24:55.022586 kubelet[2574]: I1027 23:24:55.021877 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49fa36a4-22b6-495e-9a92-1f06464d9fc3-cilium-config-path\") pod \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\" (UID: \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\") " Oct 27 23:24:55.022586 kubelet[2574]: I1027 23:24:55.021930 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m82kp\" (UniqueName: \"kubernetes.io/projected/49fa36a4-22b6-495e-9a92-1f06464d9fc3-kube-api-access-m82kp\") pod \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\" (UID: \"49fa36a4-22b6-495e-9a92-1f06464d9fc3\") " Oct 27 23:24:55.034691 kubelet[2574]: I1027 23:24:55.034642 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fa36a4-22b6-495e-9a92-1f06464d9fc3-kube-api-access-m82kp" (OuterVolumeSpecName: "kube-api-access-m82kp") pod "49fa36a4-22b6-495e-9a92-1f06464d9fc3" (UID: "49fa36a4-22b6-495e-9a92-1f06464d9fc3"). InnerVolumeSpecName "kube-api-access-m82kp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:24:55.043364 kubelet[2574]: I1027 23:24:55.043324 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49fa36a4-22b6-495e-9a92-1f06464d9fc3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49fa36a4-22b6-495e-9a92-1f06464d9fc3" (UID: "49fa36a4-22b6-495e-9a92-1f06464d9fc3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 23:24:55.122884 kubelet[2574]: I1027 23:24:55.122832 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-hostproc\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.122884 kubelet[2574]: I1027 23:24:55.122879 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-xtables-lock\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.122884 kubelet[2574]: I1027 23:24:55.122896 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-run\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.122918 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-config-path\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.122932 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-bpf-maps\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.122954 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-net\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.122994 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-kernel\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.123013 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-cgroup\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123084 kubelet[2574]: I1027 23:24:55.123029 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-etc-cni-netd\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123047 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e13702b-8eac-49ae-b850-f40e3278254e-clustermesh-secrets\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") 
" Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123067 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-hubble-tls\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123083 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-lib-modules\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123103 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhfx8\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123119 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cni-path\") pod \"0e13702b-8eac-49ae-b850-f40e3278254e\" (UID: \"0e13702b-8eac-49ae-b850-f40e3278254e\") " Oct 27 23:24:55.123222 kubelet[2574]: I1027 23:24:55.123153 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49fa36a4-22b6-495e-9a92-1f06464d9fc3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.123387 kubelet[2574]: I1027 23:24:55.123162 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m82kp\" (UniqueName: \"kubernetes.io/projected/49fa36a4-22b6-495e-9a92-1f06464d9fc3-kube-api-access-m82kp\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.123387 kubelet[2574]: I1027 23:24:55.123221 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cni-path" (OuterVolumeSpecName: "cni-path") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.123387 kubelet[2574]: I1027 23:24:55.123254 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-hostproc" (OuterVolumeSpecName: "hostproc") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.123387 kubelet[2574]: I1027 23:24:55.123296 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.123387 kubelet[2574]: I1027 23:24:55.123313 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.123606 kubelet[2574]: I1027 23:24:55.123581 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.124264 kubelet[2574]: I1027 23:24:55.123979 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.124264 kubelet[2574]: I1027 23:24:55.124007 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.124264 kubelet[2574]: I1027 23:24:55.124022 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.124264 kubelet[2574]: I1027 23:24:55.124037 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.124546 kubelet[2574]: I1027 23:24:55.124522 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:24:55.125246 kubelet[2574]: I1027 23:24:55.125200 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 23:24:55.126064 kubelet[2574]: I1027 23:24:55.126029 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:24:55.126123 kubelet[2574]: I1027 23:24:55.126114 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e13702b-8eac-49ae-b850-f40e3278254e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 23:24:55.127008 kubelet[2574]: I1027 23:24:55.126978 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8" (OuterVolumeSpecName: "kube-api-access-nhfx8") pod "0e13702b-8eac-49ae-b850-f40e3278254e" (UID: "0e13702b-8eac-49ae-b850-f40e3278254e"). InnerVolumeSpecName "kube-api-access-nhfx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223313 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhfx8\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-kube-api-access-nhfx8\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223350 2574 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223364 2574 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223372 2574 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223382 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223400 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223412 2574 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223502 kubelet[2574]: I1027 23:24:55.223420 2574 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 27 
23:24:55.223776 kubelet[2574]: I1027 23:24:55.223428 2574 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223776 kubelet[2574]: I1027 23:24:55.223443 2574 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223776 kubelet[2574]: I1027 23:24:55.223452 2574 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223776 kubelet[2574]: I1027 23:24:55.223460 2574 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e13702b-8eac-49ae-b850-f40e3278254e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223776 kubelet[2574]: I1027 23:24:55.223469 2574 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e13702b-8eac-49ae-b850-f40e3278254e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.223776 kubelet[2574]: I1027 23:24:55.223477 2574 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e13702b-8eac-49ae-b850-f40e3278254e-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 27 23:24:55.569629 kubelet[2574]: I1027 23:24:55.569576 2574 scope.go:117] "RemoveContainer" containerID="ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158" Oct 27 23:24:55.571856 containerd[1450]: time="2025-10-27T23:24:55.571535525Z" level=info msg="RemoveContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\"" Oct 27 23:24:55.577797 systemd[1]: Removed slice kubepods-besteffort-pod49fa36a4_22b6_495e_9a92_1f06464d9fc3.slice - libcontainer container kubepods-besteffort-pod49fa36a4_22b6_495e_9a92_1f06464d9fc3.slice. Oct 27 23:24:55.579297 systemd[1]: Removed slice kubepods-burstable-pod0e13702b_8eac_49ae_b850_f40e3278254e.slice - libcontainer container kubepods-burstable-pod0e13702b_8eac_49ae_b850_f40e3278254e.slice. Oct 27 23:24:55.579390 systemd[1]: kubepods-burstable-pod0e13702b_8eac_49ae_b850_f40e3278254e.slice: Consumed 6.548s CPU time, 124.1M memory peak, 176K read from disk, 12.9M written to disk. 
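Once both sandboxes are stopped, kubelet's reconciler walks every volume of the two deleted pods (the cilium host paths, cilium-config-path, clustermesh-secrets, hubble-tls, and the projected kube-api-access tokens): each volume gets an "UnmountVolume started" entry and, after TearDown succeeds, a matching "Volume detached ... DevicePath \"\"" entry. A small Python sketch, assuming the escaped-quote volume-name form shown above (sample lines are abbreviated, not verbatim), that flags any volume whose detach is never reported:

```python
import re

# kubelet logs each volume of a deleted pod twice, with quotes escaped exactly as in this journal:
#   "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: ...) pod \"0e13702b-...\" ..."
#   "Volume detached for volume \"hostproc\" ... on node \"localhost\" DevicePath \"\""
NAME = re.compile(r'volume \\"(?P<vol>[^"\\]+)\\"')

def pending_detaches(lines):
    """Return volumes whose unmount was started but whose detach was never reported."""
    started, detached = set(), set()
    for line in lines:
        m = NAME.search(line)
        if not m:
            continue
        if "UnmountVolume started" in line:
            started.add(m["vol"])
        elif "Volume detached" in line:
            detached.add(m["vol"])
    return started - detached

if __name__ == "__main__":
    sample = [
        'kubelet[2574]: "operationExecutor.UnmountVolume started for volume \\"hostproc\\" (UniqueName: ...)"',
        'kubelet[2574]: "Volume detached for volume \\"hostproc\\" on node \\"localhost\\" DevicePath \\"\\""',
    ]
    print(pending_detaches(sample))  # set() -> every started unmount was detached
```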
Oct 27 23:24:55.598120 containerd[1450]: time="2025-10-27T23:24:55.598025589Z" level=info msg="RemoveContainer for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" returns successfully" Oct 27 23:24:55.599224 kubelet[2574]: I1027 23:24:55.599091 2574 scope.go:117] "RemoveContainer" containerID="d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50" Oct 27 23:24:55.608996 containerd[1450]: time="2025-10-27T23:24:55.608872816Z" level=info msg="RemoveContainer for \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\"" Oct 27 23:24:55.611969 containerd[1450]: time="2025-10-27T23:24:55.611911703Z" level=info msg="RemoveContainer for \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\" returns successfully" Oct 27 23:24:55.612246 kubelet[2574]: I1027 23:24:55.612196 2574 scope.go:117] "RemoveContainer" containerID="066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff" Oct 27 23:24:55.613257 containerd[1450]: time="2025-10-27T23:24:55.613230946Z" level=info msg="RemoveContainer for \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\"" Oct 27 23:24:55.620055 containerd[1450]: time="2025-10-27T23:24:55.620009243Z" level=info msg="RemoveContainer for \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\" returns successfully" Oct 27 23:24:55.620293 kubelet[2574]: I1027 23:24:55.620243 2574 scope.go:117] "RemoveContainer" containerID="41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264" Oct 27 23:24:55.621465 containerd[1450]: time="2025-10-27T23:24:55.621367846Z" level=info msg="RemoveContainer for \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\"" Oct 27 23:24:55.624674 containerd[1450]: time="2025-10-27T23:24:55.624563854Z" level=info msg="RemoveContainer for \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\" returns successfully" Oct 27 23:24:55.624794 kubelet[2574]: I1027 23:24:55.624765 2574 scope.go:117] "RemoveContainer" containerID="f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5" Oct 27 23:24:55.625766 containerd[1450]: time="2025-10-27T23:24:55.625739697Z" level=info msg="RemoveContainer for \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\"" Oct 27 23:24:55.628444 containerd[1450]: time="2025-10-27T23:24:55.628415703Z" level=info msg="RemoveContainer for \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\" returns successfully" Oct 27 23:24:55.628645 kubelet[2574]: I1027 23:24:55.628621 2574 scope.go:117] "RemoveContainer" containerID="ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158" Oct 27 23:24:55.628865 containerd[1450]: time="2025-10-27T23:24:55.628832824Z" level=error msg="ContainerStatus for \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\": not found" Oct 27 23:24:55.634638 kubelet[2574]: E1027 23:24:55.634611 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\": not found" containerID="ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158" Oct 27 23:24:55.634700 kubelet[2574]: I1027 23:24:55.634650 2574 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158"} err="failed to get container status \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac78d072625dbb637efef341a8475471f85b95e98bd5ddeeb031de851cb03158\": not found" Oct 27 23:24:55.634700 kubelet[2574]: I1027 23:24:55.634692 2574 scope.go:117] "RemoveContainer" containerID="d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50" Oct 27 23:24:55.635005 containerd[1450]: time="2025-10-27T23:24:55.634907759Z" level=error msg="ContainerStatus for \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\": not found" Oct 27 23:24:55.635103 kubelet[2574]: E1027 23:24:55.635047 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\": not found" containerID="d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50" Oct 27 23:24:55.635103 kubelet[2574]: I1027 23:24:55.635072 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50"} err="failed to get container status \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\": rpc error: code = NotFound desc = an error occurred when try to find container \"d44fe846d8b54e9ba9ee75de1d718f9c0caf81f0f028003022827adbf5164f50\": not found" Oct 27 23:24:55.635103 kubelet[2574]: I1027 23:24:55.635089 2574 scope.go:117] "RemoveContainer" containerID="066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff" Oct 27 23:24:55.635379 containerd[1450]: time="2025-10-27T23:24:55.635315280Z" level=error msg="ContainerStatus for \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\": not found" Oct 27 23:24:55.635453 kubelet[2574]: E1027 23:24:55.635431 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\": not found" containerID="066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff" Oct 27 23:24:55.635481 kubelet[2574]: I1027 23:24:55.635459 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff"} err="failed to get container status \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"066d9d38f46e1e3a3c0d02c26aa647939a94b98222aeb3c058063c73a82463ff\": not found" Oct 27 23:24:55.635481 kubelet[2574]: I1027 23:24:55.635473 2574 scope.go:117] "RemoveContainer" containerID="41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264" Oct 27 23:24:55.635749 containerd[1450]: time="2025-10-27T23:24:55.635691441Z" level=error msg="ContainerStatus for \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\": not found" Oct 27 23:24:55.635815 kubelet[2574]: E1027 23:24:55.635794 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\": not found" containerID="41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264" Oct 27 23:24:55.635815 kubelet[2574]: I1027 23:24:55.635807 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264"} err="failed to get container status \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\": rpc error: code = NotFound desc = an error occurred when try to find container \"41c096ade347d08dcda5d8b15012f14e593d0f628bbdd929fdd2e28df3c7d264\": not found" Oct 27 23:24:55.635860 kubelet[2574]: I1027 23:24:55.635819 2574 scope.go:117] "RemoveContainer" containerID="f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5" Oct 27 23:24:55.636085 containerd[1450]: time="2025-10-27T23:24:55.635989602Z" level=error msg="ContainerStatus for \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\": not found" Oct 27 23:24:55.636127 kubelet[2574]: E1027 23:24:55.636079 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\": not found" containerID="f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5" Oct 27 23:24:55.636127 kubelet[2574]: I1027 23:24:55.636101 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5"} err="failed to get container status \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a2e23f698cb03125f70b26bc4bff8d0acfea527765d46c377595034e36fce5\": not found" Oct 27 23:24:55.636127 kubelet[2574]: I1027 23:24:55.636115 2574 scope.go:117] "RemoveContainer" containerID="1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96" Oct 27 23:24:55.637083 containerd[1450]: time="2025-10-27T23:24:55.637059164Z" level=info msg="RemoveContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\"" Oct 27 23:24:55.640054 containerd[1450]: time="2025-10-27T23:24:55.639948211Z" level=info msg="RemoveContainer for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" returns successfully" Oct 27 23:24:55.640178 kubelet[2574]: I1027 23:24:55.640151 2574 scope.go:117] "RemoveContainer" containerID="1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96" Oct 27 23:24:55.640432 containerd[1450]: time="2025-10-27T23:24:55.640387452Z" level=error msg="ContainerStatus for \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\": not found" Oct 27 23:24:55.640664 
kubelet[2574]: E1027 23:24:55.640636 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\": not found" containerID="1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96" Oct 27 23:24:55.640719 kubelet[2574]: I1027 23:24:55.640670 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96"} err="failed to get container status \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cac997c4736a0cce09f9a60d51d8f5be9bde1dd6de9187a78cd1e949ba60f96\": not found" Oct 27 23:24:55.763630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48d3e91989c97119e86ee380e96928110c9aa768392d5def4161555e8e3d22a-rootfs.mount: Deactivated successfully. Oct 27 23:24:55.763738 systemd[1]: var-lib-kubelet-pods-0e13702b\x2d8eac\x2d49ae\x2db850\x2df40e3278254e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnhfx8.mount: Deactivated successfully. Oct 27 23:24:55.763801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07e6ea6265f2fc69bbfb1a84597723fa8a03f608ba034699808d8a4c197d5c56-rootfs.mount: Deactivated successfully. Oct 27 23:24:55.763864 systemd[1]: var-lib-kubelet-pods-49fa36a4\x2d22b6\x2d495e\x2d9a92\x2d1f06464d9fc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm82kp.mount: Deactivated successfully. Oct 27 23:24:55.763920 systemd[1]: var-lib-kubelet-pods-0e13702b\x2d8eac\x2d49ae\x2db850\x2df40e3278254e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 27 23:24:55.763976 systemd[1]: var-lib-kubelet-pods-0e13702b\x2d8eac\x2d49ae\x2db850\x2df40e3278254e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 27 23:24:56.384672 kubelet[2574]: I1027 23:24:56.383861 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e13702b-8eac-49ae-b850-f40e3278254e" path="/var/lib/kubelet/pods/0e13702b-8eac-49ae-b850-f40e3278254e/volumes" Oct 27 23:24:56.384672 kubelet[2574]: I1027 23:24:56.384417 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fa36a4-22b6-495e-9a92-1f06464d9fc3" path="/var/lib/kubelet/pods/49fa36a4-22b6-495e-9a92-1f06464d9fc3/volumes" Oct 27 23:24:56.672075 sshd[4259]: Connection closed by 10.0.0.1 port 59018 Oct 27 23:24:56.673677 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:56.686945 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:59022.service - OpenSSH per-connection server daemon (10.0.0.1:59022). Oct 27 23:24:56.687454 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:59018.service: Deactivated successfully. Oct 27 23:24:56.692148 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 23:24:56.694366 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Oct 27 23:24:56.698051 systemd-logind[1437]: Removed session 23. Oct 27 23:24:56.737586 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 59022 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:56.739009 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:56.746329 systemd-logind[1437]: New session 24 of user core. 
Oct 27 23:24:56.757473 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 23:24:57.427316 kubelet[2574]: E1027 23:24:57.427200 2574 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 27 23:24:57.980112 sshd[4422]: Connection closed by 10.0.0.1 port 59022 Oct 27 23:24:57.980809 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:57.998442 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:59022.service: Deactivated successfully. Oct 27 23:24:58.000418 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 23:24:58.000713 systemd[1]: session-24.scope: Consumed 1.125s CPU time, 26.3M memory peak. Oct 27 23:24:58.001598 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Oct 27 23:24:58.013767 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:59032.service - OpenSSH per-connection server daemon (10.0.0.1:59032). Oct 27 23:24:58.015894 systemd-logind[1437]: Removed session 24. Oct 27 23:24:58.038618 systemd[1]: Created slice kubepods-burstable-pod35ebbe23_2d89_4180_96a9_e7c69ba062fa.slice - libcontainer container kubepods-burstable-pod35ebbe23_2d89_4180_96a9_e7c69ba062fa.slice. Oct 27 23:24:58.063065 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 59032 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:58.064958 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:58.068949 systemd-logind[1437]: New session 25 of user core. Oct 27 23:24:58.082462 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 27 23:24:58.130885 sshd[4436]: Connection closed by 10.0.0.1 port 59032 Oct 27 23:24:58.131115 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Oct 27 23:24:58.139507 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:59032.service: Deactivated successfully. Oct 27 23:24:58.141211 systemd[1]: session-25.scope: Deactivated successfully. Oct 27 23:24:58.142027 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. 
Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142301 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-host-proc-sys-kernel\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142344 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67flh\" (UniqueName: \"kubernetes.io/projected/35ebbe23-2d89-4180-96a9-e7c69ba062fa-kube-api-access-67flh\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142367 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-xtables-lock\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142385 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35ebbe23-2d89-4180-96a9-e7c69ba062fa-cilium-config-path\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142409 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35ebbe23-2d89-4180-96a9-e7c69ba062fa-hubble-tls\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144047 kubelet[2574]: I1027 23:24:58.142424 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-bpf-maps\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142464 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-cni-path\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142510 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-lib-modules\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142537 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-host-proc-sys-net\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142565 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-etc-cni-netd\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142596 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-cilium-run\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144456 kubelet[2574]: I1027 23:24:58.142612 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-hostproc\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144574 kubelet[2574]: I1027 23:24:58.142626 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35ebbe23-2d89-4180-96a9-e7c69ba062fa-cilium-cgroup\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144574 kubelet[2574]: I1027 23:24:58.142643 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35ebbe23-2d89-4180-96a9-e7c69ba062fa-clustermesh-secrets\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.144574 kubelet[2574]: I1027 23:24:58.142658 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35ebbe23-2d89-4180-96a9-e7c69ba062fa-cilium-ipsec-secrets\") pod \"cilium-wpsdr\" (UID: \"35ebbe23-2d89-4180-96a9-e7c69ba062fa\") " pod="kube-system/cilium-wpsdr" Oct 27 23:24:58.152906 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:59036.service - OpenSSH per-connection server daemon (10.0.0.1:59036). Oct 27 23:24:58.154378 systemd-logind[1437]: Removed session 25. Oct 27 23:24:58.192218 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 59036 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:24:58.193631 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:24:58.198351 systemd-logind[1437]: New session 26 of user core. Oct 27 23:24:58.207529 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 27 23:24:58.343768 kubelet[2574]: E1027 23:24:58.343645 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:58.345604 containerd[1450]: time="2025-10-27T23:24:58.345561662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpsdr,Uid:35ebbe23-2d89-4180-96a9-e7c69ba062fa,Namespace:kube-system,Attempt:0,}" Oct 27 23:24:58.367707 containerd[1450]: time="2025-10-27T23:24:58.367458111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:24:58.367707 containerd[1450]: time="2025-10-27T23:24:58.367520072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:24:58.367707 containerd[1450]: time="2025-10-27T23:24:58.367535112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:58.368379 containerd[1450]: time="2025-10-27T23:24:58.368302953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:24:58.385499 systemd[1]: Started cri-containerd-942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb.scope - libcontainer container 942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb. Oct 27 23:24:58.414261 containerd[1450]: time="2025-10-27T23:24:58.414222577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpsdr,Uid:35ebbe23-2d89-4180-96a9-e7c69ba062fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\"" Oct 27 23:24:58.414949 kubelet[2574]: E1027 23:24:58.414927 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:58.419865 containerd[1450]: time="2025-10-27T23:24:58.419818309Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 27 23:24:58.429351 containerd[1450]: time="2025-10-27T23:24:58.428891769Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7\"" Oct 27 23:24:58.429454 containerd[1450]: time="2025-10-27T23:24:58.429435851Z" level=info msg="StartContainer for \"5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7\"" Oct 27 23:24:58.457667 systemd[1]: Started cri-containerd-5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7.scope - libcontainer container 5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7. Oct 27 23:24:58.486012 containerd[1450]: time="2025-10-27T23:24:58.485965098Z" level=info msg="StartContainer for \"5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7\" returns successfully" Oct 27 23:24:58.491503 systemd[1]: cri-containerd-5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7.scope: Deactivated successfully. 
Oct 27 23:24:58.519810 containerd[1450]: time="2025-10-27T23:24:58.519744534Z" level=info msg="shim disconnected" id=5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7 namespace=k8s.io Oct 27 23:24:58.520162 containerd[1450]: time="2025-10-27T23:24:58.520007894Z" level=warning msg="cleaning up after shim disconnected" id=5544df465f4e02dc8525032f4f757e0413eff2f33c6769e6c562b2a44e5857d7 namespace=k8s.io Oct 27 23:24:58.520162 containerd[1450]: time="2025-10-27T23:24:58.520025094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:58.568704 kubelet[2574]: E1027 23:24:58.568423 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:58.578171 containerd[1450]: time="2025-10-27T23:24:58.578118345Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 27 23:24:58.590465 containerd[1450]: time="2025-10-27T23:24:58.590328972Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6\"" Oct 27 23:24:58.591418 containerd[1450]: time="2025-10-27T23:24:58.591381374Z" level=info msg="StartContainer for \"c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6\"" Oct 27 23:24:58.614471 systemd[1]: Started cri-containerd-c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6.scope - libcontainer container c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6. Oct 27 23:24:58.636699 containerd[1450]: time="2025-10-27T23:24:58.636658836Z" level=info msg="StartContainer for \"c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6\" returns successfully" Oct 27 23:24:58.642815 systemd[1]: cri-containerd-c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6.scope: Deactivated successfully. 
Oct 27 23:24:58.667040 containerd[1450]: time="2025-10-27T23:24:58.666979024Z" level=info msg="shim disconnected" id=c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6 namespace=k8s.io Oct 27 23:24:58.667040 containerd[1450]: time="2025-10-27T23:24:58.667032744Z" level=warning msg="cleaning up after shim disconnected" id=c14aea5e272e1edd1ce57e2ed3a4487a38e2a478776176887a715594dfc0c0a6 namespace=k8s.io Oct 27 23:24:58.667040 containerd[1450]: time="2025-10-27T23:24:58.667040864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:24:59.571751 kubelet[2574]: E1027 23:24:59.571716 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:24:59.577869 containerd[1450]: time="2025-10-27T23:24:59.577822356Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 27 23:24:59.597873 containerd[1450]: time="2025-10-27T23:24:59.597741119Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49\"" Oct 27 23:24:59.598387 containerd[1450]: time="2025-10-27T23:24:59.598360201Z" level=info msg="StartContainer for \"4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49\"" Oct 27 23:24:59.598414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317540099.mount: Deactivated successfully. Oct 27 23:24:59.634510 systemd[1]: Started cri-containerd-4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49.scope - libcontainer container 4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49. Oct 27 23:24:59.661475 containerd[1450]: time="2025-10-27T23:24:59.661418099Z" level=info msg="StartContainer for \"4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49\" returns successfully" Oct 27 23:24:59.662607 systemd[1]: cri-containerd-4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49.scope: Deactivated successfully. Oct 27 23:24:59.689168 containerd[1450]: time="2025-10-27T23:24:59.688947679Z" level=info msg="shim disconnected" id=4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49 namespace=k8s.io Oct 27 23:24:59.689168 containerd[1450]: time="2025-10-27T23:24:59.689000119Z" level=warning msg="cleaning up after shim disconnected" id=4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49 namespace=k8s.io Oct 27 23:24:59.689168 containerd[1450]: time="2025-10-27T23:24:59.689007719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:25:00.249298 systemd[1]: run-containerd-runc-k8s.io-4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49-runc.KS8XIY.mount: Deactivated successfully. Oct 27 23:25:00.251164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0c0161ff9b77b0f73bb9d71b06dbf8ef129e9207475ed57fa3fa560d932d49-rootfs.mount: Deactivated successfully. 
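The replacement cilium-wpsdr pod is then brought up one init step at a time, each following the same containerd pattern: "CreateContainer within sandbox ... returns container id", "StartContainer ... returns successfully", the container's scope is deactivated, and its shim disconnects before the next step begins (mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs above; clean-cilium-state and cilium-agent follow below). A hedged Python sketch that reconstructs that ordering from lines shaped like these (the sample strings are abbreviated, not verbatim log text):

```python
import re

# Per init step the journal shows, in order:
#   msg="CreateContainer within sandbox ... &ContainerMetadata{Name:<step>,Attempt:0,} returns container id \"<id>\""
#   msg="StartContainer for \"<id>\" returns successfully"
#   msg="shim disconnected" id=<id>
CREATED = re.compile(
    r'ContainerMetadata\{Name:(?P<step>[\w-]+),.*?returns container id \\"(?P<cid>[0-9a-f]+)\\"'
)
DISCONNECTED = re.compile(r'msg="shim disconnected" id=(?P<cid>[0-9a-f]+)')

def init_steps(lines):
    """Return (step_name, container_id, shim_exited) tuples in creation order."""
    steps, exited = [], set()
    for line in lines:
        if (m := CREATED.search(line)):
            steps.append((m["step"], m["cid"]))
        elif (m := DISCONNECTED.search(line)):
            exited.add(m["cid"])
    return [(name, cid, cid in exited) for name, cid in steps]

if __name__ == "__main__":
    sample = [
        'containerd[1450]: msg="CreateContainer within sandbox \\"942649\\" for '
        '&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \\"5544df465f4e\\""',
        'containerd[1450]: level=info msg="shim disconnected" id=5544df465f4e namespace=k8s.io',
    ]
    print(init_steps(sample))  # [('mount-cgroup', '5544df465f4e', True)]
```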
Oct 27 23:25:00.473315 kernel: hrtimer: interrupt took 10000301 ns
Oct 27 23:25:00.576245 kubelet[2574]: E1027 23:25:00.576004 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:00.587477 containerd[1450]: time="2025-10-27T23:25:00.585501446Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 27 23:25:00.610360 containerd[1450]: time="2025-10-27T23:25:00.610311338Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2\""
Oct 27 23:25:00.612739 containerd[1450]: time="2025-10-27T23:25:00.611736341Z" level=info msg="StartContainer for \"fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2\""
Oct 27 23:25:00.647479 systemd[1]: Started cri-containerd-fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2.scope - libcontainer container fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2.
Oct 27 23:25:00.668206 systemd[1]: cri-containerd-fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2.scope: Deactivated successfully.
Oct 27 23:25:00.670984 containerd[1450]: time="2025-10-27T23:25:00.670903587Z" level=info msg="StartContainer for \"fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2\" returns successfully"
Oct 27 23:25:00.690405 containerd[1450]: time="2025-10-27T23:25:00.690324789Z" level=info msg="shim disconnected" id=fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2 namespace=k8s.io
Oct 27 23:25:00.690405 containerd[1450]: time="2025-10-27T23:25:00.690402949Z" level=warning msg="cleaning up after shim disconnected" id=fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2 namespace=k8s.io
Oct 27 23:25:00.690405 containerd[1450]: time="2025-10-27T23:25:00.690413429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 27 23:25:01.249322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fac1345d5455ad2c2068db08ac7c6d9c8b3731ccb01cf8965da1ae10ab2f2ba2-rootfs.mount: Deactivated successfully.
Oct 27 23:25:01.580516 kubelet[2574]: E1027 23:25:01.580418 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:01.587478 containerd[1450]: time="2025-10-27T23:25:01.587440906Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 27 23:25:01.600202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178569266.mount: Deactivated successfully.
Oct 27 23:25:01.601387 containerd[1450]: time="2025-10-27T23:25:01.601308375Z" level=info msg="CreateContainer within sandbox \"942649b4f61341d5275138ebec07b614e137a3ef8bd27925a1651b7380719bbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61\""
Oct 27 23:25:01.603450 containerd[1450]: time="2025-10-27T23:25:01.602558458Z" level=info msg="StartContainer for \"dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61\""
Oct 27 23:25:01.631534 systemd[1]: Started cri-containerd-dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61.scope - libcontainer container dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61.
Oct 27 23:25:01.656249 containerd[1450]: time="2025-10-27T23:25:01.656140209Z" level=info msg="StartContainer for \"dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61\" returns successfully"
Oct 27 23:25:01.922345 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 27 23:25:02.584763 kubelet[2574]: E1027 23:25:02.584690 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:04.346039 kubelet[2574]: E1027 23:25:04.346000 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:04.727453 systemd-networkd[1381]: lxc_health: Link UP
Oct 27 23:25:04.727705 systemd-networkd[1381]: lxc_health: Gained carrier
Oct 27 23:25:05.382047 kubelet[2574]: E1027 23:25:05.381636 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:06.346367 kubelet[2574]: E1027 23:25:06.346320 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:06.367749 kubelet[2574]: I1027 23:25:06.367053 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wpsdr" podStartSLOduration=9.367038321 podStartE2EDuration="9.367038321s" podCreationTimestamp="2025-10-27 23:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:25:02.600206214 +0000 UTC m=+80.305256551" watchObservedRunningTime="2025-10-27 23:25:06.367038321 +0000 UTC m=+84.072088698"
Oct 27 23:25:06.594260 kubelet[2574]: E1027 23:25:06.594223 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:06.752420 systemd-networkd[1381]: lxc_health: Gained IPv6LL
Oct 27 23:25:07.594303 kubelet[2574]: E1027 23:25:07.594134 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:25:08.382009 kubelet[2574]: E1027 23:25:08.381961 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
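[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=9.367038321, which matches the gap between podCreationTimestamp (23:24:57) and watchObservedRunningTime (23:25:06.367038321); the zero-valued pull timestamps suggest no image pull was recorded for this pod. A small Go check of that subtraction, using only the two timestamps printed in the log:

// startup_duration.go - recomputes the reported podStartSLOduration from the
// timestamps in the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-10-27 23:24:57 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-10-27 23:25:06.367038321 +0000 UTC")

	// 23:25:06.367038321 - 23:24:57 = 9.367038321s, matching podStartSLOduration.
	fmt.Println("startup duration:", observed.Sub(created))
}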
Oct 27 23:25:11.005980 systemd[1]: run-containerd-runc-k8s.io-dd1d2ad1af0dad32e72684787d64dfae48832f738ccad74415c2f698f1414c61-runc.SeBiGD.mount: Deactivated successfully.
Oct 27 23:25:11.081776 sshd[4445]: Connection closed by 10.0.0.1 port 59036
Oct 27 23:25:11.082316 sshd-session[4442]: pam_unix(sshd:session): session closed for user core
Oct 27 23:25:11.084829 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit.
Oct 27 23:25:11.085114 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:59036.service: Deactivated successfully.
Oct 27 23:25:11.087501 systemd[1]: session-26.scope: Deactivated successfully.
Oct 27 23:25:11.090629 systemd-logind[1437]: Removed session 26.