Oct 13 04:51:59.365838 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 13 04:51:59.365861 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Oct 13 03:30:16 -00 2025 Oct 13 04:51:59.365870 kernel: KASLR enabled Oct 13 04:51:59.365876 kernel: efi: EFI v2.7 by EDK II Oct 13 04:51:59.365882 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 13 04:51:59.365888 kernel: random: crng init done Oct 13 04:51:59.365895 kernel: secureboot: Secure boot disabled Oct 13 04:51:59.365901 kernel: ACPI: Early table checksum verification disabled Oct 13 04:51:59.365909 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 13 04:51:59.365915 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 13 04:51:59.365922 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365928 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365933 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365940 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365949 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365956 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365962 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365969 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365975 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 04:51:59.365982 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 13 04:51:59.365988 kernel: ACPI: Use ACPI SPCR as default console: No Oct 13 04:51:59.365995 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 04:51:59.366002 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 13 04:51:59.366009 kernel: Zone ranges: Oct 13 04:51:59.366015 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 04:51:59.366021 kernel: DMA32 empty Oct 13 04:51:59.366028 kernel: Normal empty Oct 13 04:51:59.366034 kernel: Device empty Oct 13 04:51:59.366040 kernel: Movable zone start for each node Oct 13 04:51:59.366046 kernel: Early memory node ranges Oct 13 04:51:59.366052 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 13 04:51:59.366059 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 13 04:51:59.366065 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 13 04:51:59.366071 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 13 04:51:59.366079 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 13 04:51:59.366085 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 13 04:51:59.366092 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 13 04:51:59.366098 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 13 04:51:59.366105 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 13 04:51:59.366111 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 13 04:51:59.366132 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
13 04:51:59.366139 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 13 04:51:59.366145 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 13 04:51:59.366152 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 04:51:59.366159 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 13 04:51:59.366166 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 13 04:51:59.366172 kernel: psci: probing for conduit method from ACPI. Oct 13 04:51:59.366180 kernel: psci: PSCIv1.1 detected in firmware. Oct 13 04:51:59.366191 kernel: psci: Using standard PSCI v0.2 function IDs Oct 13 04:51:59.366199 kernel: psci: Trusted OS migration not required Oct 13 04:51:59.366206 kernel: psci: SMC Calling Convention v1.1 Oct 13 04:51:59.366213 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 13 04:51:59.366220 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 13 04:51:59.366227 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 13 04:51:59.366233 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 13 04:51:59.366240 kernel: Detected PIPT I-cache on CPU0 Oct 13 04:51:59.366247 kernel: CPU features: detected: GIC system register CPU interface Oct 13 04:51:59.366254 kernel: CPU features: detected: Spectre-v4 Oct 13 04:51:59.366261 kernel: CPU features: detected: Spectre-BHB Oct 13 04:51:59.366269 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 13 04:51:59.366276 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 13 04:51:59.366283 kernel: CPU features: detected: ARM erratum 1418040 Oct 13 04:51:59.366290 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 13 04:51:59.366296 kernel: alternatives: applying boot alternatives Oct 13 04:51:59.366304 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99 Oct 13 04:51:59.366311 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 04:51:59.366318 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 04:51:59.366325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 04:51:59.366331 kernel: Fallback order for Node 0: 0 Oct 13 04:51:59.366339 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 13 04:51:59.366346 kernel: Policy zone: DMA Oct 13 04:51:59.366353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 04:51:59.366359 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 13 04:51:59.366366 kernel: software IO TLB: area num 4. Oct 13 04:51:59.366373 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 13 04:51:59.366379 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 13 04:51:59.366395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 04:51:59.366402 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 04:51:59.366409 kernel: rcu: RCU event tracing is enabled. Oct 13 04:51:59.366416 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Oct 13 04:51:59.366425 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 04:51:59.366438 kernel: Tracing variant of Tasks RCU enabled. Oct 13 04:51:59.366445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 04:51:59.366452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 04:51:59.366459 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 04:51:59.366467 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 04:51:59.366474 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 13 04:51:59.366481 kernel: GICv3: 256 SPIs implemented Oct 13 04:51:59.366488 kernel: GICv3: 0 Extended SPIs implemented Oct 13 04:51:59.366495 kernel: Root IRQ handler: gic_handle_irq Oct 13 04:51:59.366502 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 13 04:51:59.366510 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 13 04:51:59.366517 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 13 04:51:59.366524 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 13 04:51:59.366530 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 13 04:51:59.366538 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 13 04:51:59.366545 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 13 04:51:59.366552 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 13 04:51:59.366559 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 04:51:59.366566 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 04:51:59.366574 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 13 04:51:59.366581 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 13 04:51:59.366590 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 13 04:51:59.366597 kernel: arm-pv: using stolen time PV Oct 13 04:51:59.366604 kernel: Console: colour dummy device 80x25 Oct 13 04:51:59.366612 kernel: ACPI: Core revision 20240827 Oct 13 04:51:59.366620 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 13 04:51:59.366627 kernel: pid_max: default: 32768 minimum: 301 Oct 13 04:51:59.366634 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 04:51:59.366641 kernel: landlock: Up and running. Oct 13 04:51:59.366650 kernel: SELinux: Initializing. Oct 13 04:51:59.366657 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 04:51:59.366664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 04:51:59.366671 kernel: rcu: Hierarchical SRCU implementation. Oct 13 04:51:59.366679 kernel: rcu: Max phase no-delay instances is 400. Oct 13 04:51:59.366686 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 04:51:59.366693 kernel: Remapping and enabling EFI services. Oct 13 04:51:59.366702 kernel: smp: Bringing up secondary CPUs ... 
Oct 13 04:51:59.366713 kernel: Detected PIPT I-cache on CPU1 Oct 13 04:51:59.366722 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 13 04:51:59.366730 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 13 04:51:59.366737 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 04:51:59.366745 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 13 04:51:59.366752 kernel: Detected PIPT I-cache on CPU2 Oct 13 04:51:59.366760 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 13 04:51:59.366769 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 13 04:51:59.366777 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 04:51:59.366784 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 13 04:51:59.366796 kernel: Detected PIPT I-cache on CPU3 Oct 13 04:51:59.366804 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 13 04:51:59.366811 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 13 04:51:59.366820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 04:51:59.366828 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 13 04:51:59.366835 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 04:51:59.366843 kernel: SMP: Total of 4 processors activated. Oct 13 04:51:59.366850 kernel: CPU: All CPU(s) started at EL1 Oct 13 04:51:59.366858 kernel: CPU features: detected: 32-bit EL0 Support Oct 13 04:51:59.366866 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 13 04:51:59.366876 kernel: CPU features: detected: Common not Private translations Oct 13 04:51:59.366884 kernel: CPU features: detected: CRC32 instructions Oct 13 04:51:59.366891 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 13 04:51:59.366899 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 13 04:51:59.366907 kernel: CPU features: detected: LSE atomic instructions Oct 13 04:51:59.366914 kernel: CPU features: detected: Privileged Access Never Oct 13 04:51:59.366921 kernel: CPU features: detected: RAS Extension Support Oct 13 04:51:59.366929 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 13 04:51:59.366938 kernel: alternatives: applying system-wide alternatives Oct 13 04:51:59.366945 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 13 04:51:59.366953 kernel: Memory: 2450400K/2572288K available (11200K kernel code, 2456K rwdata, 9080K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Oct 13 04:51:59.366961 kernel: devtmpfs: initialized Oct 13 04:51:59.366968 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 04:51:59.366976 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 04:51:59.366984 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 13 04:51:59.366993 kernel: 0 pages in range for non-PLT usage Oct 13 04:51:59.367001 kernel: 515040 pages in range for PLT usage Oct 13 04:51:59.367008 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 04:51:59.367016 kernel: SMBIOS 3.0.0 present. 
Oct 13 04:51:59.367024 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 13 04:51:59.367031 kernel: DMI: Memory slots populated: 1/1 Oct 13 04:51:59.367053 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 04:51:59.367064 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 13 04:51:59.367072 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 13 04:51:59.367080 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 13 04:51:59.367088 kernel: audit: initializing netlink subsys (disabled) Oct 13 04:51:59.367096 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Oct 13 04:51:59.367103 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 04:51:59.367111 kernel: cpuidle: using governor menu Oct 13 04:51:59.367119 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 13 04:51:59.367129 kernel: ASID allocator initialised with 32768 entries Oct 13 04:51:59.367137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 04:51:59.367144 kernel: Serial: AMBA PL011 UART driver Oct 13 04:51:59.367152 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 04:51:59.367159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 04:51:59.367167 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 13 04:51:59.367175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 13 04:51:59.367184 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 04:51:59.367192 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 04:51:59.367200 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 13 04:51:59.367208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 13 04:51:59.367215 kernel: ACPI: Added _OSI(Module Device) Oct 13 04:51:59.367222 kernel: ACPI: Added _OSI(Processor Device) Oct 13 04:51:59.367230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 04:51:59.367239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 04:51:59.367247 kernel: ACPI: Interpreter enabled Oct 13 04:51:59.367254 kernel: ACPI: Using GIC for interrupt routing Oct 13 04:51:59.367262 kernel: ACPI: MCFG table detected, 1 entries Oct 13 04:51:59.367269 kernel: ACPI: CPU0 has been hot-added Oct 13 04:51:59.367277 kernel: ACPI: CPU1 has been hot-added Oct 13 04:51:59.367284 kernel: ACPI: CPU2 has been hot-added Oct 13 04:51:59.367292 kernel: ACPI: CPU3 has been hot-added Oct 13 04:51:59.367301 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 13 04:51:59.367309 kernel: printk: legacy console [ttyAMA0] enabled Oct 13 04:51:59.367317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 04:51:59.367484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 04:51:59.367581 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 13 04:51:59.367669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 13 04:51:59.367756 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 13 04:51:59.367851 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 13 04:51:59.367862 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 13 04:51:59.367870 
kernel: PCI host bridge to bus 0000:00 Oct 13 04:51:59.367964 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 13 04:51:59.368041 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 13 04:51:59.368138 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 13 04:51:59.368219 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 04:51:59.368333 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 13 04:51:59.368446 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 04:51:59.368537 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 13 04:51:59.368622 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 13 04:51:59.368703 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 13 04:51:59.368784 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 13 04:51:59.368883 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 13 04:51:59.368964 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 13 04:51:59.369040 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 13 04:51:59.369126 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 13 04:51:59.369200 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 13 04:51:59.369209 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 13 04:51:59.369217 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 13 04:51:59.369224 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 13 04:51:59.369232 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 13 04:51:59.369241 kernel: iommu: Default domain type: Translated Oct 13 04:51:59.369249 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 13 04:51:59.369256 kernel: efivars: Registered efivars operations Oct 13 04:51:59.369264 kernel: vgaarb: loaded Oct 13 04:51:59.369271 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 13 04:51:59.369278 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 04:51:59.369286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 04:51:59.369296 kernel: pnp: PnP ACPI init Oct 13 04:51:59.369411 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 13 04:51:59.369422 kernel: pnp: PnP ACPI: found 1 devices Oct 13 04:51:59.369430 kernel: NET: Registered PF_INET protocol family Oct 13 04:51:59.369437 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 04:51:59.369445 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 04:51:59.369453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 04:51:59.369463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 04:51:59.369470 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 04:51:59.369478 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 04:51:59.369486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 04:51:59.369493 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 04:51:59.369500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 04:51:59.369508 kernel: PCI: CLS 0 bytes, default 64 Oct 13 04:51:59.369517 
kernel: kvm [1]: HYP mode not available Oct 13 04:51:59.369524 kernel: Initialise system trusted keyrings Oct 13 04:51:59.369532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 04:51:59.369539 kernel: Key type asymmetric registered Oct 13 04:51:59.369546 kernel: Asymmetric key parser 'x509' registered Oct 13 04:51:59.369554 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 13 04:51:59.369562 kernel: io scheduler mq-deadline registered Oct 13 04:51:59.369570 kernel: io scheduler kyber registered Oct 13 04:51:59.369578 kernel: io scheduler bfq registered Oct 13 04:51:59.369585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 13 04:51:59.369593 kernel: ACPI: button: Power Button [PWRB] Oct 13 04:51:59.369601 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 13 04:51:59.369684 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 13 04:51:59.369695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 04:51:59.369704 kernel: thunder_xcv, ver 1.0 Oct 13 04:51:59.369711 kernel: thunder_bgx, ver 1.0 Oct 13 04:51:59.369719 kernel: nicpf, ver 1.0 Oct 13 04:51:59.369726 kernel: nicvf, ver 1.0 Oct 13 04:51:59.369826 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 13 04:51:59.369905 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T04:51:58 UTC (1760331118) Oct 13 04:51:59.369915 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 04:51:59.369925 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 13 04:51:59.369932 kernel: watchdog: NMI not fully supported Oct 13 04:51:59.369940 kernel: watchdog: Hard watchdog permanently disabled Oct 13 04:51:59.369947 kernel: NET: Registered PF_INET6 protocol family Oct 13 04:51:59.369955 kernel: Segment Routing with IPv6 Oct 13 04:51:59.369963 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 04:51:59.369970 kernel: NET: Registered PF_PACKET protocol family Oct 13 04:51:59.369979 kernel: Key type dns_resolver registered Oct 13 04:51:59.369987 kernel: registered taskstats version 1 Oct 13 04:51:59.369994 kernel: Loading compiled-in X.509 certificates Oct 13 04:51:59.370002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 0d5be6bcdaeaf26c55e47d87e2567b03196058e4' Oct 13 04:51:59.370010 kernel: Demotion targets for Node 0: null Oct 13 04:51:59.370017 kernel: Key type .fscrypt registered Oct 13 04:51:59.370025 kernel: Key type fscrypt-provisioning registered Oct 13 04:51:59.370034 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 04:51:59.370041 kernel: ima: Allocated hash algorithm: sha1 Oct 13 04:51:59.370049 kernel: ima: No architecture policies found Oct 13 04:51:59.370056 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 13 04:51:59.370064 kernel: clk: Disabling unused clocks Oct 13 04:51:59.370071 kernel: PM: genpd: Disabling unused power domains Oct 13 04:51:59.370078 kernel: Freeing unused kernel memory: 12992K Oct 13 04:51:59.370087 kernel: Run /init as init process Oct 13 04:51:59.370095 kernel: with arguments: Oct 13 04:51:59.370103 kernel: /init Oct 13 04:51:59.370110 kernel: with environment: Oct 13 04:51:59.370117 kernel: HOME=/ Oct 13 04:51:59.370124 kernel: TERM=linux Oct 13 04:51:59.370132 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 04:51:59.370235 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 13 04:51:59.370318 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 04:51:59.370328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 04:51:59.370335 kernel: GPT:16515071 != 27000831 Oct 13 04:51:59.370343 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 04:51:59.370351 kernel: GPT:16515071 != 27000831 Oct 13 04:51:59.370358 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 04:51:59.370367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 04:51:59.370375 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370396 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370403 kernel: SCSI subsystem initialized Oct 13 04:51:59.370411 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370419 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 04:51:59.370427 kernel: device-mapper: uevent: version 1.0.3 Oct 13 04:51:59.370436 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 04:51:59.370444 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 13 04:51:59.370452 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370459 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370467 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370475 kernel: raid6: neonx8 gen() 15780 MB/s Oct 13 04:51:59.370482 kernel: raid6: neonx4 gen() 15818 MB/s Oct 13 04:51:59.370491 kernel: raid6: neonx2 gen() 13220 MB/s Oct 13 04:51:59.370499 kernel: raid6: neonx1 gen() 10439 MB/s Oct 13 04:51:59.370506 kernel: raid6: int64x8 gen() 6906 MB/s Oct 13 04:51:59.370514 kernel: raid6: int64x4 gen() 7353 MB/s Oct 13 04:51:59.370522 kernel: raid6: int64x2 gen() 6105 MB/s Oct 13 04:51:59.370529 kernel: raid6: int64x1 gen() 5055 MB/s Oct 13 04:51:59.370537 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s Oct 13 04:51:59.370546 kernel: raid6: .... 
xor() 12359 MB/s, rmw enabled Oct 13 04:51:59.370555 kernel: raid6: using neon recovery algorithm Oct 13 04:51:59.370562 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370570 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370577 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370585 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370592 kernel: xor: measuring software checksum speed Oct 13 04:51:59.370600 kernel: 8regs : 20428 MB/sec Oct 13 04:51:59.370608 kernel: 32regs : 21687 MB/sec Oct 13 04:51:59.370617 kernel: arm64_neon : 28138 MB/sec Oct 13 04:51:59.370624 kernel: xor: using function: arm64_neon (28138 MB/sec) Oct 13 04:51:59.370632 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370639 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 04:51:59.370647 kernel: BTRFS: device fsid 976d1a25-6e06-4ce9-b674-96d83e61f95d devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (203) Oct 13 04:51:59.370654 kernel: BTRFS info (device dm-0): first mount of filesystem 976d1a25-6e06-4ce9-b674-96d83e61f95d Oct 13 04:51:59.370662 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:51:59.370671 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 04:51:59.370679 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 04:51:59.370686 kernel: Invalid ELF header magic: != \u007fELF Oct 13 04:51:59.370693 kernel: loop: module loaded Oct 13 04:51:59.370701 kernel: loop0: detected capacity change from 0 to 91456 Oct 13 04:51:59.370709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 04:51:59.370718 systemd[1]: Successfully made /usr/ read-only. Oct 13 04:51:59.370730 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 04:51:59.370739 systemd[1]: Detected virtualization kvm. Oct 13 04:51:59.370747 systemd[1]: Detected architecture arm64. Oct 13 04:51:59.370755 systemd[1]: Running in initrd. Oct 13 04:51:59.370763 systemd[1]: No hostname configured, using default hostname. Oct 13 04:51:59.370771 systemd[1]: Hostname set to . Oct 13 04:51:59.370781 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 04:51:59.370796 systemd[1]: Queued start job for default target initrd.target. Oct 13 04:51:59.370804 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 04:51:59.370812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 04:51:59.370821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 04:51:59.370829 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 04:51:59.370844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 04:51:59.370854 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 04:51:59.370862 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Oct 13 04:51:59.370871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 04:51:59.370880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 04:51:59.370889 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 04:51:59.370898 systemd[1]: Reached target paths.target - Path Units. Oct 13 04:51:59.370906 systemd[1]: Reached target slices.target - Slice Units. Oct 13 04:51:59.370915 systemd[1]: Reached target swap.target - Swaps. Oct 13 04:51:59.370923 systemd[1]: Reached target timers.target - Timer Units. Oct 13 04:51:59.370932 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 04:51:59.370942 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 04:51:59.370951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 04:51:59.370959 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 04:51:59.370967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 04:51:59.370976 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 04:51:59.370984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 04:51:59.370993 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 04:51:59.371003 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 04:51:59.371011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 04:51:59.371020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 04:51:59.371028 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 04:51:59.371037 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 04:51:59.371046 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 04:51:59.371055 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 04:51:59.371064 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 04:51:59.371072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:51:59.371082 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 04:51:59.371093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 04:51:59.371102 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 04:51:59.371111 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 04:51:59.371120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 04:51:59.371146 systemd-journald[343]: Collecting audit messages is disabled. Oct 13 04:51:59.371167 kernel: Bridge firewalling registered Oct 13 04:51:59.371175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 04:51:59.371186 systemd-journald[343]: Journal started Oct 13 04:51:59.371206 systemd-journald[343]: Runtime Journal (/run/log/journal/3b70ff9cd4894c7daa654cf22cd571a7) is 6M, max 48.5M, 42.4M free. 
Oct 13 04:51:59.366141 systemd-modules-load[344]: Inserted module 'br_netfilter' Oct 13 04:51:59.374406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 04:51:59.378083 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 04:51:59.381675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 04:51:59.384457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:51:59.388741 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 04:51:59.390447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 04:51:59.402010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 04:51:59.405231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:51:59.408072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 04:51:59.412218 systemd-tmpfiles[369]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 04:51:59.412506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 04:51:59.417615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:51:59.425540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 04:51:59.428927 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 04:51:59.445061 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99 Oct 13 04:51:59.466565 systemd-resolved[375]: Positive Trust Anchors: Oct 13 04:51:59.466583 systemd-resolved[375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 04:51:59.466586 systemd-resolved[375]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 04:51:59.466617 systemd-resolved[375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 04:51:59.488289 systemd-resolved[375]: Defaulting to hostname 'linux'. Oct 13 04:51:59.489467 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 04:51:59.491736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 04:51:59.527419 kernel: Loading iSCSI transport class v2.0-870. 
Oct 13 04:51:59.536421 kernel: iscsi: registered transport (tcp) Oct 13 04:51:59.549635 kernel: iscsi: registered transport (qla4xxx) Oct 13 04:51:59.549693 kernel: QLogic iSCSI HBA Driver Oct 13 04:51:59.571000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 04:51:59.592430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:51:59.593999 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 04:51:59.641561 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 04:51:59.644063 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 04:51:59.645849 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 04:51:59.680040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 04:51:59.682769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:51:59.712060 systemd-udevd[624]: Using default interface naming scheme 'v257'. Oct 13 04:51:59.721067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:51:59.724250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 04:51:59.752615 dracut-pre-trigger[695]: rd.md=0: removing MD RAID activation Oct 13 04:51:59.753450 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 04:51:59.757277 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 04:51:59.778451 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 04:51:59.780961 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 04:51:59.804817 systemd-networkd[740]: lo: Link UP Oct 13 04:51:59.804825 systemd-networkd[740]: lo: Gained carrier Oct 13 04:51:59.805300 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 04:51:59.806802 systemd[1]: Reached target network.target - Network. Oct 13 04:51:59.839610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:51:59.842248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 04:51:59.890784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 04:51:59.897921 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 04:51:59.905240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 04:51:59.912290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 04:51:59.914239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 04:51:59.928170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 04:51:59.928295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:51:59.934568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:51:59.937877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 04:51:59.940689 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:51:59.944167 disk-uuid[802]: Primary Header is updated. Oct 13 04:51:59.944167 disk-uuid[802]: Secondary Entries is updated. Oct 13 04:51:59.944167 disk-uuid[802]: Secondary Header is updated. Oct 13 04:51:59.940693 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 04:51:59.943655 systemd-networkd[740]: eth0: Link UP Oct 13 04:51:59.943816 systemd-networkd[740]: eth0: Gained carrier Oct 13 04:51:59.943828 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:51:59.954463 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 04:51:59.971878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:52:00.008479 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 04:52:00.009799 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 04:52:00.011974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:52:00.014359 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 04:52:00.017678 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 04:52:00.042304 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 04:52:00.968979 disk-uuid[805]: Warning: The kernel is still using the old partition table. Oct 13 04:52:00.968979 disk-uuid[805]: The new table will be used at the next reboot or after you Oct 13 04:52:00.968979 disk-uuid[805]: run partprobe(8) or kpartx(8) Oct 13 04:52:00.968979 disk-uuid[805]: The operation has completed successfully. Oct 13 04:52:00.979372 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 04:52:00.980221 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 04:52:00.983014 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 04:52:01.012400 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833) Oct 13 04:52:01.014114 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:52:01.014158 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:52:01.016468 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:52:01.016515 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:52:01.021413 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:52:01.022256 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 04:52:01.024315 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 04:52:01.117593 ignition[852]: Ignition 2.22.0 Oct 13 04:52:01.117609 ignition[852]: Stage: fetch-offline Oct 13 04:52:01.117649 ignition[852]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:01.117660 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:01.117740 ignition[852]: parsed url from cmdline: "" Oct 13 04:52:01.117743 ignition[852]: no config URL provided Oct 13 04:52:01.117748 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 04:52:01.117756 ignition[852]: no config at "/usr/lib/ignition/user.ign" Oct 13 04:52:01.117802 ignition[852]: op(1): [started] loading QEMU firmware config module Oct 13 04:52:01.117806 ignition[852]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 04:52:01.127618 ignition[852]: op(1): [finished] loading QEMU firmware config module Oct 13 04:52:01.167321 ignition[852]: parsing config with SHA512: fe731f56d6a1ab92ae7ed63c675f5fd1be459e78f8214a0ad24b2f3202b4a4eac1c8963310e0701233aff644f85aa919ac07b18555709228471d8cfefe66586d Oct 13 04:52:01.173270 unknown[852]: fetched base config from "system" Oct 13 04:52:01.173286 unknown[852]: fetched user config from "qemu" Oct 13 04:52:01.173804 ignition[852]: fetch-offline: fetch-offline passed Oct 13 04:52:01.173878 ignition[852]: Ignition finished successfully Oct 13 04:52:01.177434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 04:52:01.179421 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 04:52:01.180256 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 04:52:01.215899 ignition[866]: Ignition 2.22.0 Oct 13 04:52:01.215914 ignition[866]: Stage: kargs Oct 13 04:52:01.216056 ignition[866]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:01.216063 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:01.216808 ignition[866]: kargs: kargs passed Oct 13 04:52:01.219800 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 04:52:01.216851 ignition[866]: Ignition finished successfully Oct 13 04:52:01.221774 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 04:52:01.264607 ignition[874]: Ignition 2.22.0 Oct 13 04:52:01.264624 ignition[874]: Stage: disks Oct 13 04:52:01.264751 ignition[874]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:01.268078 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 04:52:01.264758 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:01.269330 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 04:52:01.265548 ignition[874]: disks: disks passed Oct 13 04:52:01.271272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 04:52:01.265593 ignition[874]: Ignition finished successfully Oct 13 04:52:01.273607 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 04:52:01.275546 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 04:52:01.277073 systemd[1]: Reached target basic.target - Basic System. Oct 13 04:52:01.280242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 13 04:52:01.309125 systemd-fsck[884]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 04:52:01.313163 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 04:52:01.315523 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 04:52:01.381414 kernel: EXT4-fs (vda9): mounted filesystem a42694d5-feb9-4394-9ac1-a45818242d2d r/w with ordered data mode. Quota mode: none. Oct 13 04:52:01.382085 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 04:52:01.383375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 04:52:01.385813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 04:52:01.387539 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 04:52:01.388563 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 04:52:01.388596 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 04:52:01.388620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 04:52:01.398071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 04:52:01.401267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 04:52:01.406147 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Oct 13 04:52:01.406167 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:52:01.406184 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:52:01.406194 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:52:01.407416 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:52:01.408254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 04:52:01.438166 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 04:52:01.441525 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Oct 13 04:52:01.444609 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 04:52:01.448435 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 04:52:01.518788 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 04:52:01.521154 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 04:52:01.522854 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 04:52:01.536791 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 04:52:01.539419 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:52:01.556664 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 04:52:01.571236 ignition[1006]: INFO : Ignition 2.22.0 Oct 13 04:52:01.571236 ignition[1006]: INFO : Stage: mount Oct 13 04:52:01.572947 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:01.572947 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:01.572947 ignition[1006]: INFO : mount: mount passed Oct 13 04:52:01.572947 ignition[1006]: INFO : Ignition finished successfully Oct 13 04:52:01.573944 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 04:52:01.575970 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 13 04:52:01.886575 systemd-networkd[740]: eth0: Gained IPv6LL Oct 13 04:52:02.383747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 04:52:02.413947 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Oct 13 04:52:02.413988 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:52:02.413999 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:52:02.416913 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:52:02.416933 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:52:02.418505 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 04:52:02.452008 ignition[1036]: INFO : Ignition 2.22.0 Oct 13 04:52:02.452008 ignition[1036]: INFO : Stage: files Oct 13 04:52:02.453410 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:02.453410 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:02.453410 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Oct 13 04:52:02.456416 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 04:52:02.456416 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 04:52:02.458846 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 04:52:02.458846 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 04:52:02.458846 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 04:52:02.458846 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 13 04:52:02.458846 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 13 04:52:02.457274 unknown[1036]: wrote ssh authorized keys file for user: core Oct 13 04:52:03.193703 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 04:52:03.916192 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 13 04:52:03.917948 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 04:52:03.917948 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 13 04:52:04.125166 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 13 04:52:04.217307 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 04:52:04.217307 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 04:52:04.220348 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:52:04.234398 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:52:04.234398 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:52:04.234398 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 13 04:52:04.524097 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 13 04:52:04.818164 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:52:04.818164 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 13 04:52:04.821035 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 13 04:52:04.822724 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 04:52:04.837455 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 
04:52:04.841122 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 04:52:04.843514 ignition[1036]: INFO : files: files passed Oct 13 04:52:04.843514 ignition[1036]: INFO : Ignition finished successfully Oct 13 04:52:04.844485 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 04:52:04.846317 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 04:52:04.848954 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 04:52:04.865945 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 04:52:04.866066 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 04:52:04.869162 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 04:52:04.871676 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:52:04.871676 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:52:04.875100 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:52:04.876562 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 04:52:04.878580 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 04:52:04.880149 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 04:52:04.928953 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 04:52:04.929057 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 04:52:04.930706 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 04:52:04.931991 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 04:52:04.933584 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 04:52:04.934368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 04:52:04.948540 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 04:52:04.951527 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 04:52:04.973182 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 04:52:04.973405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 04:52:04.975001 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:52:04.976423 systemd[1]: Stopped target timers.target - Timer Units. 
Oct 13 04:52:04.977798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 04:52:04.977921 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 04:52:04.979838 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 04:52:04.981216 systemd[1]: Stopped target basic.target - Basic System. Oct 13 04:52:04.982436 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 04:52:04.983706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 04:52:04.985191 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 04:52:04.986661 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 04:52:04.988186 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 04:52:04.989545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 04:52:04.991025 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 04:52:04.992465 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 04:52:04.993821 systemd[1]: Stopped target swap.target - Swaps. Oct 13 04:52:04.995006 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 04:52:04.995128 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 04:52:04.996861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 04:52:04.998254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 04:52:04.999697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 04:52:05.001132 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 04:52:05.002166 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 04:52:05.002291 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 04:52:05.004349 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 04:52:05.004498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 04:52:05.005986 systemd[1]: Stopped target paths.target - Path Units. Oct 13 04:52:05.007090 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 04:52:05.010418 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 04:52:05.011348 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 04:52:05.012999 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 04:52:05.014125 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 04:52:05.014206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 04:52:05.015318 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 04:52:05.015410 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 04:52:05.016537 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 04:52:05.016638 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 04:52:05.017922 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 04:52:05.018020 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 04:52:05.020008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 04:52:05.021812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 13 04:52:05.022529 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 04:52:05.022644 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:52:05.024140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 04:52:05.024242 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:52:05.025508 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 04:52:05.025606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 04:52:05.032065 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 04:52:05.033424 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 04:52:05.037594 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 04:52:05.043654 ignition[1093]: INFO : Ignition 2.22.0 Oct 13 04:52:05.043654 ignition[1093]: INFO : Stage: umount Oct 13 04:52:05.045080 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:52:05.045080 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:52:05.045080 ignition[1093]: INFO : umount: umount passed Oct 13 04:52:05.045080 ignition[1093]: INFO : Ignition finished successfully Oct 13 04:52:05.047021 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 04:52:05.047192 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 04:52:05.048259 systemd[1]: Stopped target network.target - Network. Oct 13 04:52:05.049209 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 04:52:05.049257 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 04:52:05.050608 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 04:52:05.050656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 04:52:05.051927 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 04:52:05.051971 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 04:52:05.053276 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 04:52:05.053316 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 04:52:05.054865 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 04:52:05.056414 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 04:52:05.065520 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 04:52:05.065656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 04:52:05.075978 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 04:52:05.076094 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 04:52:05.078920 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 04:52:05.079042 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 04:52:05.080818 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 04:52:05.081753 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 04:52:05.081800 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 04:52:05.083432 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 04:52:05.083483 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 04:52:05.085560 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Oct 13 04:52:05.086241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 04:52:05.086295 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 04:52:05.087947 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 04:52:05.087992 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:52:05.089410 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 04:52:05.089446 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 04:52:05.090955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:52:05.100673 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 04:52:05.100842 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:52:05.102547 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 04:52:05.102586 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 04:52:05.104201 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 04:52:05.104230 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 04:52:05.105720 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 04:52:05.105772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 04:52:05.108063 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 04:52:05.108109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 04:52:05.110180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 04:52:05.110222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 04:52:05.113170 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 04:52:05.114885 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 04:52:05.114942 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:52:05.116721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 04:52:05.116767 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 04:52:05.118308 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 04:52:05.118345 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 04:52:05.119928 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 04:52:05.119963 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 04:52:05.121880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 04:52:05.121919 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:52:05.134740 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 04:52:05.134870 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 04:52:05.136671 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 04:52:05.136791 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 04:52:05.139955 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 04:52:05.141648 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Oct 13 04:52:05.151509 systemd[1]: Switching root. Oct 13 04:52:05.181342 systemd-journald[343]: Journal stopped Oct 13 04:52:05.940766 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). Oct 13 04:52:05.940821 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 04:52:05.940841 kernel: SELinux: policy capability open_perms=1 Oct 13 04:52:05.940854 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 04:52:05.940864 kernel: SELinux: policy capability always_check_network=0 Oct 13 04:52:05.940873 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 04:52:05.940883 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 04:52:05.940892 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 04:52:05.940902 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 04:52:05.940912 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 04:52:05.940924 kernel: audit: type=1403 audit(1760331125.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 04:52:05.940937 systemd[1]: Successfully loaded SELinux policy in 62.046ms. Oct 13 04:52:05.940953 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.850ms. Oct 13 04:52:05.940964 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 04:52:05.940977 systemd[1]: Detected virtualization kvm. Oct 13 04:52:05.940987 systemd[1]: Detected architecture arm64. Oct 13 04:52:05.940997 systemd[1]: Detected first boot. Oct 13 04:52:05.941009 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 04:52:05.941021 kernel: NET: Registered PF_VSOCK protocol family Oct 13 04:52:05.941031 zram_generator::config[1142]: No configuration found. Oct 13 04:52:05.941042 systemd[1]: Populated /etc with preset unit settings. Oct 13 04:52:05.941053 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 04:52:05.941063 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 04:52:05.941074 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 04:52:05.941085 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 04:52:05.941097 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 04:52:05.941107 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 04:52:05.941122 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 04:52:05.941133 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 04:52:05.941147 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 04:52:05.941160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 04:52:05.941171 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 04:52:05.941181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 04:52:05.941192 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 04:52:05.941202 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 04:52:05.941213 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 04:52:05.941224 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 04:52:05.941235 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 04:52:05.941246 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 13 04:52:05.941257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 04:52:05.941268 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 04:52:05.941279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 04:52:05.941289 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 04:52:05.941301 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 04:52:05.941312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 04:52:05.941322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:52:05.941333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 04:52:05.941343 systemd[1]: Reached target slices.target - Slice Units. Oct 13 04:52:05.941354 systemd[1]: Reached target swap.target - Swaps. Oct 13 04:52:05.941364 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 04:52:05.941374 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 04:52:05.941404 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 04:52:05.941416 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 04:52:05.941427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 04:52:05.941438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 04:52:05.941448 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 04:52:05.941458 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 04:52:05.941469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 04:52:05.941488 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 04:52:05.941499 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 04:52:05.941510 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 04:52:05.941520 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 04:52:05.941531 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 04:52:05.941542 systemd[1]: Reached target machines.target - Containers. Oct 13 04:52:05.941555 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 04:52:05.941569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:52:05.941580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 04:52:05.941591 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Oct 13 04:52:05.941602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:52:05.941612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 04:52:05.941624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:52:05.941636 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 04:52:05.941646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:52:05.941657 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 04:52:05.941672 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 04:52:05.941687 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 04:52:05.941698 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 04:52:05.941708 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 04:52:05.941723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:52:05.941734 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 04:52:05.941744 kernel: fuse: init (API version 7.41) Oct 13 04:52:05.941754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 04:52:05.941770 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 04:52:05.941781 kernel: ACPI: bus type drm_connector registered Oct 13 04:52:05.941792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 04:52:05.941804 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 04:52:05.941815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 04:52:05.941827 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 04:52:05.941838 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 04:52:05.941851 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 04:52:05.941862 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 04:52:05.941872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 04:52:05.941884 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 04:52:05.941912 systemd-journald[1208]: Collecting audit messages is disabled. Oct 13 04:52:05.941934 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 04:52:05.941947 systemd-journald[1208]: Journal started Oct 13 04:52:05.941968 systemd-journald[1208]: Runtime Journal (/run/log/journal/3b70ff9cd4894c7daa654cf22cd571a7) is 6M, max 48.5M, 42.4M free. Oct 13 04:52:05.753210 systemd[1]: Queued start job for default target multi-user.target. Oct 13 04:52:05.773445 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 04:52:05.773940 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 04:52:05.945431 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 04:52:05.946451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 04:52:05.947576 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
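
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above are systemd's template service for making sure a module is available before the services that need it start; the kernel's own "fuse: init (API version 7.41)" and drm_connector lines confirm the loads. The sketch below does the equivalent check by hand as an aside; the module list mirrors the log, and built-in modules never appear in /proc/modules (modprobe simply exits 0 for them).

    # Illustrative sketch: roughly what the modprobe@.service template units
    # above accomplish - ensure a module is loaded. Needs root to load anything.
    import subprocess

    def module_loaded(name: str) -> bool:
        # /proc/modules lists loadable modules only; built-ins are absent.
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    def ensure_module(name: str) -> None:
        if module_loaded(name):
            print(f"{name}: already loaded")
            return
        subprocess.run(["modprobe", name], check=True)
        print(f"{name}: loaded (or built in)")

    for mod in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        ensure_module(mod)
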
Oct 13 04:52:05.949422 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 04:52:05.950512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:52:05.950654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 04:52:05.951750 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 04:52:05.951922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 04:52:05.953025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:52:05.953164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:52:05.954316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 04:52:05.954472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 04:52:05.955461 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:52:05.955594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:52:05.956681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 04:52:05.957876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:52:05.961196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 04:52:05.962634 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 04:52:05.974753 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 04:52:05.975911 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 04:52:05.977792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 04:52:05.979689 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 04:52:05.980508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 04:52:05.980535 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 04:52:05.982051 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 04:52:05.983186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:52:05.993165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 04:52:05.994950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 04:52:05.995886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 04:52:05.996753 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 04:52:05.997647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 04:52:05.999001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 04:52:06.001573 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 04:52:06.004785 systemd-journald[1208]: Time spent on flushing to /var/log/journal/3b70ff9cd4894c7daa654cf22cd571a7 is 19.858ms for 889 entries. Oct 13 04:52:06.004785 systemd-journald[1208]: System Journal (/var/log/journal/3b70ff9cd4894c7daa654cf22cd571a7) is 8M, max 163.5M, 155.5M free. 
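
systemd-journald is running with a 6M runtime journal under /run/log/journal and, once systemd-journal-flush hands entries over, an 8M system journal under /var/log/journal with the limits reported above. The stdlib sketch below just tallies those directories du-style as a cross-check; the machine-id directory name differs per host and journald's own accounting rounds to file granularity, so the numbers are only approximate.

    # Illustrative sketch: tally on-disk size of the runtime and persistent
    # journals whose limits are reported above. Pure stdlib "du"-style walk.
    import os

    def dir_size(path: str) -> int:
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass
        return total

    for journal_dir in ("/run/log/journal", "/var/log/journal"):
        print(f"{journal_dir}: {dir_size(journal_dir) / 1024**2:.1f} MiB")
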
Oct 13 04:52:06.039094 systemd-journald[1208]: Received client request to flush runtime journal. Oct 13 04:52:06.039149 kernel: loop1: detected capacity change from 0 to 100624 Oct 13 04:52:06.004659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 04:52:06.009362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:52:06.011517 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 04:52:06.013209 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 04:52:06.017147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 04:52:06.018847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 04:52:06.023363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 04:52:06.032106 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Oct 13 04:52:06.032116 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Oct 13 04:52:06.035087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:52:06.038811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 04:52:06.041041 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 04:52:06.044303 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 04:52:06.047521 kernel: loop2: detected capacity change from 0 to 207008 Oct 13 04:52:06.055629 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 04:52:06.073113 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 04:52:06.075403 kernel: loop3: detected capacity change from 0 to 119344 Oct 13 04:52:06.076576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 04:52:06.078858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 04:52:06.085509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 04:52:06.105219 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 13 04:52:06.105239 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 13 04:52:06.107439 kernel: loop4: detected capacity change from 0 to 100624 Oct 13 04:52:06.109487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 04:52:06.114394 kernel: loop5: detected capacity change from 0 to 207008 Oct 13 04:52:06.119423 kernel: loop6: detected capacity change from 0 to 119344 Oct 13 04:52:06.121914 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 04:52:06.123558 (sd-merge)[1282]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 13 04:52:06.126454 (sd-merge)[1282]: Merged extensions into '/usr'. Oct 13 04:52:06.130127 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 04:52:06.130140 systemd[1]: Reloading... Oct 13 04:52:06.186813 systemd-resolved[1277]: Positive Trust Anchors: Oct 13 04:52:06.186833 systemd-resolved[1277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 04:52:06.186836 systemd-resolved[1277]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 04:52:06.186867 systemd-resolved[1277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 04:52:06.190410 zram_generator::config[1318]: No configuration found. Oct 13 04:52:06.195611 systemd-resolved[1277]: Defaulting to hostname 'linux'. Oct 13 04:52:06.331814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 04:52:06.331905 systemd[1]: Reloading finished in 201 ms. Oct 13 04:52:06.350053 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 04:52:06.351315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 04:52:06.356296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 04:52:06.365638 systemd[1]: Starting ensure-sysext.service... Oct 13 04:52:06.367326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 04:52:06.376546 systemd[1]: Reload requested from client PID 1349 ('systemctl') (unit ensure-sysext.service)... Oct 13 04:52:06.376561 systemd[1]: Reloading... Oct 13 04:52:06.380908 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 04:52:06.380940 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 04:52:06.381141 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 04:52:06.381315 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 04:52:06.381945 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 04:52:06.382135 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Oct 13 04:52:06.382184 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Oct 13 04:52:06.386587 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:52:06.386600 systemd-tmpfiles[1350]: Skipping /boot Oct 13 04:52:06.392884 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:52:06.392898 systemd-tmpfiles[1350]: Skipping /boot Oct 13 04:52:06.419414 zram_generator::config[1380]: No configuration found. Oct 13 04:52:06.552917 systemd[1]: Reloading finished in 176 ms. Oct 13 04:52:06.577247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 04:52:06.601762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:52:06.610203 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 04:52:06.612221 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 04:52:06.625666 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
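
The (sd-merge) lines a little earlier show systemd-sysext finding the 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' extension images (the last one being the file and /etc/extensions symlink Ignition wrote) and merging them into /usr, which is why systemd then reloads its unit set. The sketch below only lists what is staged in the directories named in the log; inspecting the active merge itself would be a job for systemd-sysext status, not shown here.

    # Illustrative sketch: list the sysext images visible under the directories
    # used in the log above. Listing only; the merge into /usr is done by
    # systemd-sysext at boot, not by this snippet.
    from pathlib import Path

    for ext_dir in (Path("/etc/extensions"), Path("/opt/extensions")):
        if not ext_dir.is_dir():
            continue
        print(ext_dir)
        for entry in sorted(ext_dir.rglob("*.raw")):
            target = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"  {entry.name}{target}")
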
Oct 13 04:52:06.627937 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 04:52:06.630175 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:52:06.633642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 04:52:06.637824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:52:06.640085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:52:06.642233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:52:06.646886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:52:06.648026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:52:06.648150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:52:06.650371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:52:06.650530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:52:06.650616 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:52:06.654242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:52:06.656812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 04:52:06.657724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:52:06.657847 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:52:06.660510 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 04:52:06.663609 systemd[1]: Finished ensure-sysext.service. Oct 13 04:52:06.671922 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 04:52:06.675802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:52:06.676059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:52:06.680332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 04:52:06.684417 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 04:52:06.688778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 04:52:06.690133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:52:06.690287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 13 04:52:06.690894 systemd-udevd[1421]: Using default interface naming scheme 'v257'. Oct 13 04:52:06.693449 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:52:06.693630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:52:06.695634 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 04:52:06.695799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 04:52:06.700124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 04:52:06.700167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 04:52:06.703378 augenrules[1454]: No rules Oct 13 04:52:06.706432 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:52:06.706721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:52:06.709675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:52:06.713149 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 04:52:06.768639 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 13 04:52:06.786415 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 04:52:06.788080 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 04:52:06.802312 systemd-networkd[1469]: lo: Link UP Oct 13 04:52:06.802320 systemd-networkd[1469]: lo: Gained carrier Oct 13 04:52:06.803426 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 04:52:06.805026 systemd[1]: Reached target network.target - Network. Oct 13 04:52:06.809497 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 04:52:06.814770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 04:52:06.851158 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 04:52:06.869056 systemd-networkd[1469]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:52:06.869068 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 04:52:06.870474 systemd-networkd[1469]: eth0: Link UP Oct 13 04:52:06.871015 systemd-networkd[1469]: eth0: Gained carrier Oct 13 04:52:06.871037 systemd-networkd[1469]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:52:06.876372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 04:52:06.883460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 04:52:06.889509 systemd-networkd[1469]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 04:52:06.890576 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Oct 13 04:52:07.321438 systemd-resolved[1277]: Clock change detected. Flushing caches. Oct 13 04:52:07.321492 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
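
systemd-networkd matched eth0 against the catch-all zz-default.network, obtained 10.0.0.32/16 with gateway 10.0.0.1 from the DHCP server at 10.0.0.1, and systemd-timesyncd then used that same host as its NTP server. As a convenience only, the sketch below pulls the resulting state out of networkctl; the command is real, but its human-readable output is not a stable interface, so the filtering here is best-effort.

    # Illustrative sketch: query the interface state that systemd-networkd
    # reports above. Output of "networkctl status" is for humans, so treat the
    # parsing as best-effort, not an API.
    import subprocess

    result = subprocess.run(
        ["networkctl", "status", "eth0"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        stripped = line.strip()
        if stripped.startswith(("State:", "Address:", "Gateway:")):
            print(stripped)
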
Oct 13 04:52:07.321673 systemd-timesyncd[1437]: Initial clock synchronization to Mon 2025-10-13 04:52:07.321397 UTC. Oct 13 04:52:07.354868 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 04:52:07.367626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:52:07.374088 ldconfig[1418]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 04:52:07.379045 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 04:52:07.388557 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 04:52:07.417631 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 04:52:07.419024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:52:07.421777 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 04:52:07.422903 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 04:52:07.424072 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 04:52:07.425189 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 04:52:07.426327 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 04:52:07.427341 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 04:52:07.428325 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 04:52:07.428359 systemd[1]: Reached target paths.target - Path Units. Oct 13 04:52:07.429127 systemd[1]: Reached target timers.target - Timer Units. Oct 13 04:52:07.430781 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 04:52:07.432816 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 04:52:07.435610 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 04:52:07.436718 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 04:52:07.437726 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 04:52:07.444523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 04:52:07.445947 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 04:52:07.447527 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 04:52:07.448531 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 04:52:07.449265 systemd[1]: Reached target basic.target - Basic System. Oct 13 04:52:07.450003 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 04:52:07.450035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 04:52:07.450993 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 04:52:07.452805 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 04:52:07.454439 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 04:52:07.457877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
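
Several of the sockets above (docker.socket, sshd.socket, systemd-hostnamed.socket) are plain socket activation: systemd keeps the listening socket open and only starts the daemon on the first connection, handing the socket over as file descriptor 3 with LISTEN_FDS and LISTEN_PID set in the environment. The sketch below shows the receiving end of that hand-off for a hypothetical single-socket echo service, not for any of the units in this log.

    # Illustrative sketch of the socket-activation hand-off used by the
    # *.socket units above: systemd passes listening sockets starting at fd 3
    # and sets LISTEN_FDS/LISTEN_PID. Hypothetical echo service, Python 3.7+.
    import os
    import socket

    SD_LISTEN_FDS_START = 3

    def activated_socket() -> socket.socket:
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            raise RuntimeError("not socket-activated by systemd")
        if int(os.environ.get("LISTEN_FDS", "0")) < 1:
            raise RuntimeError("no sockets passed")
        # family/type are auto-detected from the inherited file descriptor
        return socket.socket(fileno=SD_LISTEN_FDS_START)

    srv = activated_socket()
    conn, _addr = srv.accept()
    with conn:
        conn.sendall(conn.recv(4096))
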
Oct 13 04:52:07.459774 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 04:52:07.460587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 04:52:07.461667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 04:52:07.463277 jq[1527]: false Oct 13 04:52:07.464619 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 04:52:07.467674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 04:52:07.469793 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 04:52:07.475220 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 04:52:07.476027 extend-filesystems[1528]: Found /dev/vda6 Oct 13 04:52:07.476129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 04:52:07.476574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 04:52:07.477180 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 04:52:07.480617 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 04:52:07.480894 extend-filesystems[1528]: Found /dev/vda9 Oct 13 04:52:07.483453 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 04:52:07.483595 extend-filesystems[1528]: Checking size of /dev/vda9 Oct 13 04:52:07.484808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 04:52:07.484993 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 04:52:07.485239 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 04:52:07.485395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 04:52:07.487420 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 04:52:07.487745 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 04:52:07.502481 jq[1544]: true Oct 13 04:52:07.504265 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 04:52:07.504853 tar[1551]: linux-arm64/LICENSE Oct 13 04:52:07.505821 tar[1551]: linux-arm64/helm Oct 13 04:52:07.507222 update_engine[1543]: I20251013 04:52:07.507018 1543 main.cc:92] Flatcar Update Engine starting Oct 13 04:52:07.511024 extend-filesystems[1528]: Resized partition /dev/vda9 Oct 13 04:52:07.513615 extend-filesystems[1572]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 04:52:07.519686 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 04:52:07.530789 jq[1571]: true Oct 13 04:52:07.545297 dbus-daemon[1525]: [system] SELinux support is enabled Oct 13 04:52:07.545596 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 04:52:07.546605 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 13 04:52:07.564094 update_engine[1543]: I20251013 04:52:07.553107 1543 update_check_scheduler.cc:74] Next update check in 9m10s Oct 13 04:52:07.554531 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 04:52:07.554590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 04:52:07.555822 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 04:52:07.555840 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 04:52:07.557364 systemd[1]: Started update-engine.service - Update Engine. Oct 13 04:52:07.563150 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 04:52:07.566214 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 04:52:07.566214 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 04:52:07.566214 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 13 04:52:07.574633 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Oct 13 04:52:07.570168 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 04:52:07.570378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 04:52:07.589003 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 04:52:07.589217 systemd-logind[1541]: New seat seat0. Oct 13 04:52:07.592391 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 04:52:07.606303 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Oct 13 04:52:07.606024 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 04:52:07.608083 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
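
extend-filesystems grows the root filesystem online: resize2fs takes /dev/vda9 from 456704 to 1784827 blocks of 4 KiB, which is roughly 1.7 GiB growing to about 6.8 GiB. The snippet below is only that arithmetic spelled out; nothing is resized.

    # Arithmetic check of the resize2fs numbers logged above: counts are in
    # 4 KiB blocks, so /dev/vda9 grew from ~1.7 GiB to ~6.8 GiB.
    BLOCK = 4096
    old_blocks, new_blocks = 456_704, 1_784_827

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")
    print(f"after:  {gib(new_blocks):.2f} GiB")
    print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")
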
Oct 13 04:52:07.635645 locksmithd[1576]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 04:52:07.695452 containerd[1556]: time="2025-10-13T04:52:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 04:52:07.697512 containerd[1556]: time="2025-10-13T04:52:07.697029271Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 04:52:07.707896 containerd[1556]: time="2025-10-13T04:52:07.707843151Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.68µs" Oct 13 04:52:07.707896 containerd[1556]: time="2025-10-13T04:52:07.707885791Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 04:52:07.708000 containerd[1556]: time="2025-10-13T04:52:07.707905591Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 04:52:07.708066 containerd[1556]: time="2025-10-13T04:52:07.708044551Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 04:52:07.708095 containerd[1556]: time="2025-10-13T04:52:07.708066511Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 04:52:07.708095 containerd[1556]: time="2025-10-13T04:52:07.708090791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708154 containerd[1556]: time="2025-10-13T04:52:07.708137191Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708180 containerd[1556]: time="2025-10-13T04:52:07.708152591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708355 containerd[1556]: time="2025-10-13T04:52:07.708332351Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708355 containerd[1556]: time="2025-10-13T04:52:07.708354471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708407 containerd[1556]: time="2025-10-13T04:52:07.708365831Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708407 containerd[1556]: time="2025-10-13T04:52:07.708374511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708457 containerd[1556]: time="2025-10-13T04:52:07.708440591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708666 containerd[1556]: time="2025-10-13T04:52:07.708643231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708694 containerd[1556]: time="2025-10-13T04:52:07.708677951Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 04:52:07.708694 containerd[1556]: time="2025-10-13T04:52:07.708688631Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 04:52:07.708740 containerd[1556]: time="2025-10-13T04:52:07.708728591Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 04:52:07.709919 containerd[1556]: time="2025-10-13T04:52:07.709888231Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 04:52:07.710009 containerd[1556]: time="2025-10-13T04:52:07.709988951Z" level=info msg="metadata content store policy set" policy=shared Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.712955391Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713015591Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713029951Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713044591Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713056471Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713066271Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713077751Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713088751Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713100391Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713111271Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713119871Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713132071Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713242271Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 04:52:07.713535 containerd[1556]: time="2025-10-13T04:52:07.713262271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713281071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 
04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713293231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713307551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713318591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713330071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713340551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713362511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713374351Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713386871Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713584911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713601471Z" level=info msg="Start snapshots syncer" Oct 13 04:52:07.713820 containerd[1556]: time="2025-10-13T04:52:07.713631151Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 04:52:07.714020 containerd[1556]: time="2025-10-13T04:52:07.713838351Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 04:52:07.714020 containerd[1556]: time="2025-10-13T04:52:07.713899351Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 04:52:07.714116 containerd[1556]: time="2025-10-13T04:52:07.713960711Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 04:52:07.714116 containerd[1556]: time="2025-10-13T04:52:07.714072151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 04:52:07.714116 containerd[1556]: time="2025-10-13T04:52:07.714099551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 04:52:07.714116 containerd[1556]: time="2025-10-13T04:52:07.714110071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 04:52:07.714185 containerd[1556]: time="2025-10-13T04:52:07.714119911Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 04:52:07.714185 containerd[1556]: time="2025-10-13T04:52:07.714131871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 04:52:07.714185 containerd[1556]: time="2025-10-13T04:52:07.714141431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 04:52:07.714185 containerd[1556]: time="2025-10-13T04:52:07.714151231Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 04:52:07.714185 containerd[1556]: time="2025-10-13T04:52:07.714178551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 04:52:07.714261 containerd[1556]: 
time="2025-10-13T04:52:07.714191711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 04:52:07.714261 containerd[1556]: time="2025-10-13T04:52:07.714201911Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 04:52:07.714261 containerd[1556]: time="2025-10-13T04:52:07.714230551Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:52:07.714261 containerd[1556]: time="2025-10-13T04:52:07.714242591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:52:07.714261 containerd[1556]: time="2025-10-13T04:52:07.714250951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:52:07.714261 containerd[1556]: time="2025-10-13T04:52:07.714259911Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:52:07.714361 containerd[1556]: time="2025-10-13T04:52:07.714268031Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 04:52:07.714361 containerd[1556]: time="2025-10-13T04:52:07.714278551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 04:52:07.714361 containerd[1556]: time="2025-10-13T04:52:07.714288631Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 04:52:07.714412 containerd[1556]: time="2025-10-13T04:52:07.714369231Z" level=info msg="runtime interface created" Oct 13 04:52:07.714412 containerd[1556]: time="2025-10-13T04:52:07.714375071Z" level=info msg="created NRI interface" Oct 13 04:52:07.714412 containerd[1556]: time="2025-10-13T04:52:07.714387031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 04:52:07.714412 containerd[1556]: time="2025-10-13T04:52:07.714403111Z" level=info msg="Connect containerd service" Oct 13 04:52:07.714475 containerd[1556]: time="2025-10-13T04:52:07.714429751Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 04:52:07.715236 containerd[1556]: time="2025-10-13T04:52:07.715208951Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 04:52:07.779081 containerd[1556]: time="2025-10-13T04:52:07.778980391Z" level=info msg="Start subscribing containerd event" Oct 13 04:52:07.779081 containerd[1556]: time="2025-10-13T04:52:07.779010871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 04:52:07.779191 containerd[1556]: time="2025-10-13T04:52:07.779131591Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 13 04:52:07.779191 containerd[1556]: time="2025-10-13T04:52:07.779063511Z" level=info msg="Start recovering state" Oct 13 04:52:07.779225 containerd[1556]: time="2025-10-13T04:52:07.779216671Z" level=info msg="Start event monitor" Oct 13 04:52:07.779243 containerd[1556]: time="2025-10-13T04:52:07.779229711Z" level=info msg="Start cni network conf syncer for default" Oct 13 04:52:07.779243 containerd[1556]: time="2025-10-13T04:52:07.779236191Z" level=info msg="Start streaming server" Oct 13 04:52:07.779299 containerd[1556]: time="2025-10-13T04:52:07.779243551Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 04:52:07.779299 containerd[1556]: time="2025-10-13T04:52:07.779249911Z" level=info msg="runtime interface starting up..." Oct 13 04:52:07.779299 containerd[1556]: time="2025-10-13T04:52:07.779255391Z" level=info msg="starting plugins..." Oct 13 04:52:07.779299 containerd[1556]: time="2025-10-13T04:52:07.779267391Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 04:52:07.779548 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 04:52:07.781284 containerd[1556]: time="2025-10-13T04:52:07.780794111Z" level=info msg="containerd successfully booted in 0.087029s" Oct 13 04:52:07.852649 tar[1551]: linux-arm64/README.md Oct 13 04:52:07.873630 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 04:52:08.460702 systemd-networkd[1469]: eth0: Gained IPv6LL Oct 13 04:52:08.468605 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 04:52:08.469983 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 04:52:08.472390 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 04:52:08.474604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:08.477706 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 04:52:08.499958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 04:52:08.501292 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 04:52:08.501470 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 04:52:08.503086 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 04:52:09.004150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:09.008035 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:52:09.337777 kubelet[1647]: E1013 04:52:09.337714 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:52:09.340279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:52:09.340531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:52:09.340937 systemd[1]: kubelet.service: Consumed 734ms CPU time, 257.4M memory peak. Oct 13 04:52:09.543422 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 04:52:09.563580 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
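The containerd start above ends with "failed to load cni during init ... no network config found in /etc/cni/net.d", which is normal on a fresh node before any pod network add-on has dropped its config there. Purely as a minimal sketch (the file name, the bridge plugin choice and the 10.88.0.0/16 subnet are assumptions, not values from this boot), something like the following would give the CRI plugin's CNI conf syncer a config to pick up:

```go
// Sketch only: writes a minimal CNI bridge conflist so the containerd CRI
// plugin's "cni network conf syncer" finds something in /etc/cni/net.d.
// File name, plugins and subnet are illustrative assumptions.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.88.0.0/16"}]],
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/10-containerd-net.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /etc/cni/net.d/10-containerd-net.conflist")
}
```

In practice this file is installed by whichever CNI add-on the cluster uses rather than written by hand; the sketch only shows what the CRI plugin is looking for.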
Oct 13 04:52:09.566008 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 04:52:09.584449 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 04:52:09.585569 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 04:52:09.588726 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 04:52:09.605561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 04:52:09.609009 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 04:52:09.610928 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 04:52:09.612201 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 04:52:09.613096 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 04:52:09.613960 systemd[1]: Startup finished in 1.189s (kernel) + 6.247s (initrd) + 3.858s (userspace) = 11.296s. Oct 13 04:52:11.004067 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 04:52:11.005165 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:51520.service - OpenSSH per-connection server daemon (10.0.0.1:51520). Oct 13 04:52:11.076021 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 51520 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.077898 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.083750 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 04:52:11.084654 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 04:52:11.089336 systemd-logind[1541]: New session 1 of user core. Oct 13 04:52:11.102894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 04:52:11.105117 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 04:52:11.118754 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 04:52:11.120981 systemd-logind[1541]: New session c1 of user core. Oct 13 04:52:11.219160 systemd[1681]: Queued start job for default target default.target. Oct 13 04:52:11.243448 systemd[1681]: Created slice app.slice - User Application Slice. Oct 13 04:52:11.243476 systemd[1681]: Reached target paths.target - Paths. Oct 13 04:52:11.243548 systemd[1681]: Reached target timers.target - Timers. Oct 13 04:52:11.244714 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 04:52:11.253938 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 04:52:11.253996 systemd[1681]: Reached target sockets.target - Sockets. Oct 13 04:52:11.254030 systemd[1681]: Reached target basic.target - Basic System. Oct 13 04:52:11.254058 systemd[1681]: Reached target default.target - Main User Target. Oct 13 04:52:11.254084 systemd[1681]: Startup finished in 127ms. Oct 13 04:52:11.254276 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 04:52:11.255729 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 04:52:11.320406 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:51536.service - OpenSSH per-connection server daemon (10.0.0.1:51536). 
Oct 13 04:52:11.368343 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 51536 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.369562 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.373567 systemd-logind[1541]: New session 2 of user core. Oct 13 04:52:11.387708 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 04:52:11.438547 sshd[1695]: Connection closed by 10.0.0.1 port 51536 Oct 13 04:52:11.438991 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:11.448407 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:51536.service: Deactivated successfully. Oct 13 04:52:11.449862 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 04:52:11.450563 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. Oct 13 04:52:11.453073 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:51540.service - OpenSSH per-connection server daemon (10.0.0.1:51540). Oct 13 04:52:11.453710 systemd-logind[1541]: Removed session 2. Oct 13 04:52:11.494484 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 51540 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.495673 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.499589 systemd-logind[1541]: New session 3 of user core. Oct 13 04:52:11.508669 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 04:52:11.556560 sshd[1704]: Connection closed by 10.0.0.1 port 51540 Oct 13 04:52:11.556999 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:11.566243 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:51540.service: Deactivated successfully. Oct 13 04:52:11.568646 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 04:52:11.569578 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. Oct 13 04:52:11.571764 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:51552.service - OpenSSH per-connection server daemon (10.0.0.1:51552). Oct 13 04:52:11.572400 systemd-logind[1541]: Removed session 3. Oct 13 04:52:11.628474 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 51552 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.629645 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.634015 systemd-logind[1541]: New session 4 of user core. Oct 13 04:52:11.640660 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 04:52:11.692091 sshd[1714]: Connection closed by 10.0.0.1 port 51552 Oct 13 04:52:11.692518 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:11.700047 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:51552.service: Deactivated successfully. Oct 13 04:52:11.702864 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 04:52:11.703499 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. Oct 13 04:52:11.705658 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:51566.service - OpenSSH per-connection server daemon (10.0.0.1:51566). Oct 13 04:52:11.706086 systemd-logind[1541]: Removed session 4. 
Oct 13 04:52:11.760073 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 51566 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.761172 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.765596 systemd-logind[1541]: New session 5 of user core. Oct 13 04:52:11.780678 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 04:52:11.838461 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 04:52:11.839088 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:52:11.859493 sudo[1724]: pam_unix(sudo:session): session closed for user root Oct 13 04:52:11.861245 sshd[1723]: Connection closed by 10.0.0.1 port 51566 Oct 13 04:52:11.861648 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:11.870430 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:51566.service: Deactivated successfully. Oct 13 04:52:11.872046 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 04:52:11.874084 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. Oct 13 04:52:11.876228 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:51572.service - OpenSSH per-connection server daemon (10.0.0.1:51572). Oct 13 04:52:11.876689 systemd-logind[1541]: Removed session 5. Oct 13 04:52:11.927465 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 51572 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:11.928717 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:11.933029 systemd-logind[1541]: New session 6 of user core. Oct 13 04:52:11.947687 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 04:52:12.000552 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 04:52:12.000810 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:52:12.005666 sudo[1735]: pam_unix(sudo:session): session closed for user root Oct 13 04:52:12.011466 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 04:52:12.011743 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:52:12.020643 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 04:52:12.070048 augenrules[1757]: No rules Oct 13 04:52:12.071137 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:52:12.071376 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:52:12.072840 sudo[1734]: pam_unix(sudo:session): session closed for user root Oct 13 04:52:12.074444 sshd[1733]: Connection closed by 10.0.0.1 port 51572 Oct 13 04:52:12.074845 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:12.092634 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:51572.service: Deactivated successfully. Oct 13 04:52:12.094162 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 04:52:12.097250 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. Oct 13 04:52:12.098534 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:51584.service - OpenSSH per-connection server daemon (10.0.0.1:51584). Oct 13 04:52:12.099341 systemd-logind[1541]: Removed session 6. 
Oct 13 04:52:12.154662 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 51584 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:52:12.155463 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:52:12.159575 systemd-logind[1541]: New session 7 of user core. Oct 13 04:52:12.171710 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 04:52:12.224208 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 04:52:12.224469 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:52:12.516674 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 04:52:12.541821 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 04:52:12.753906 dockerd[1790]: time="2025-10-13T04:52:12.753818911Z" level=info msg="Starting up" Oct 13 04:52:12.754759 dockerd[1790]: time="2025-10-13T04:52:12.754735951Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 04:52:12.765819 dockerd[1790]: time="2025-10-13T04:52:12.765776951Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 04:52:12.945952 dockerd[1790]: time="2025-10-13T04:52:12.945898871Z" level=info msg="Loading containers: start." Oct 13 04:52:12.954540 kernel: Initializing XFRM netlink socket Oct 13 04:52:13.137197 systemd-networkd[1469]: docker0: Link UP Oct 13 04:52:13.141009 dockerd[1790]: time="2025-10-13T04:52:13.140961751Z" level=info msg="Loading containers: done." Oct 13 04:52:13.151686 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1923409777-merged.mount: Deactivated successfully. Oct 13 04:52:13.153123 dockerd[1790]: time="2025-10-13T04:52:13.153068951Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 04:52:13.153180 dockerd[1790]: time="2025-10-13T04:52:13.153144351Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 04:52:13.153235 dockerd[1790]: time="2025-10-13T04:52:13.153220071Z" level=info msg="Initializing buildkit" Oct 13 04:52:13.173929 dockerd[1790]: time="2025-10-13T04:52:13.173897911Z" level=info msg="Completed buildkit initialization" Oct 13 04:52:13.178452 dockerd[1790]: time="2025-10-13T04:52:13.178403551Z" level=info msg="Daemon has completed initialization" Oct 13 04:52:13.178520 dockerd[1790]: time="2025-10-13T04:52:13.178484151Z" level=info msg="API listen on /run/docker.sock" Oct 13 04:52:13.178660 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 04:52:13.956524 containerd[1556]: time="2025-10-13T04:52:13.956150471Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 13 04:52:14.504736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258213662.mount: Deactivated successfully. 
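The Docker engine above reports "API listen on /run/docker.sock". As a small sketch using the official Go SDK (github.com/docker/docker/client), the daemon can be pinged over that socket; it assumes default environment settings, i.e. DOCKER_HOST unset so the client talks to the local socket:

```go
// Sketch: confirm the Docker daemon that just reported "API listen on
// /run/docker.sock" is answering, via the official Go SDK. Assumes the
// default local-socket environment (DOCKER_HOST unset).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker daemon is up: API version %s, OS type %s\n", ping.APIVersion, ping.OSType)
}
```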
Oct 13 04:52:15.453530 containerd[1556]: time="2025-10-13T04:52:15.453062271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:15.453530 containerd[1556]: time="2025-10-13T04:52:15.453494591Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Oct 13 04:52:15.454471 containerd[1556]: time="2025-10-13T04:52:15.454439071Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:15.457569 containerd[1556]: time="2025-10-13T04:52:15.457538311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:15.458239 containerd[1556]: time="2025-10-13T04:52:15.458200031Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.5020072s" Oct 13 04:52:15.458239 containerd[1556]: time="2025-10-13T04:52:15.458237911Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 13 04:52:15.458977 containerd[1556]: time="2025-10-13T04:52:15.458942271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 13 04:52:16.479049 containerd[1556]: time="2025-10-13T04:52:16.478994111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:16.480473 containerd[1556]: time="2025-10-13T04:52:16.480436791Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Oct 13 04:52:16.481373 containerd[1556]: time="2025-10-13T04:52:16.481333991Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:16.484359 containerd[1556]: time="2025-10-13T04:52:16.484326391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:16.485350 containerd[1556]: time="2025-10-13T04:52:16.485311351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.0263348s" Oct 13 04:52:16.485413 containerd[1556]: time="2025-10-13T04:52:16.485358071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 13 04:52:16.485885 
containerd[1556]: time="2025-10-13T04:52:16.485843911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 13 04:52:17.599402 containerd[1556]: time="2025-10-13T04:52:17.599352831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:17.600368 containerd[1556]: time="2025-10-13T04:52:17.600336231Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Oct 13 04:52:17.601498 containerd[1556]: time="2025-10-13T04:52:17.601127751Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:17.604124 containerd[1556]: time="2025-10-13T04:52:17.604096871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:17.605708 containerd[1556]: time="2025-10-13T04:52:17.605667231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.11979132s" Oct 13 04:52:17.605708 containerd[1556]: time="2025-10-13T04:52:17.605707871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 13 04:52:17.606285 containerd[1556]: time="2025-10-13T04:52:17.606096831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 13 04:52:18.639460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927472566.mount: Deactivated successfully. 
Oct 13 04:52:19.047023 containerd[1556]: time="2025-10-13T04:52:19.046654671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:19.047344 containerd[1556]: time="2025-10-13T04:52:19.047179311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Oct 13 04:52:19.048455 containerd[1556]: time="2025-10-13T04:52:19.048398071Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:19.050298 containerd[1556]: time="2025-10-13T04:52:19.050273391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:19.050983 containerd[1556]: time="2025-10-13T04:52:19.050823951Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.44468696s" Oct 13 04:52:19.050983 containerd[1556]: time="2025-10-13T04:52:19.050856831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 13 04:52:19.051390 containerd[1556]: time="2025-10-13T04:52:19.051359511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 13 04:52:19.518440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 04:52:19.519831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:19.678706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:19.682713 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:52:19.726021 kubelet[2089]: E1013 04:52:19.725976 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:52:19.729043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:52:19.729274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:52:19.729849 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108.4M memory peak. Oct 13 04:52:19.937641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509559571.mount: Deactivated successfully. 
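The kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file only appears once kubeadm runs. Purely as an illustration of what normally lives at that path, the sketch below writes a minimal KubeletConfiguration. The field values are assumptions, except cgroupDriver and staticPodPath, which match the systemd cgroup setting and the /etc/kubernetes/manifests path visible elsewhere in this log:

```go
// Sketch only: write an illustrative minimal KubeletConfiguration to the
// path the failing kubelet expects. Field values are assumptions except
// cgroupDriver (systemd, matching the containerd CRI config earlier) and
// staticPodPath (/etc/kubernetes/manifests, seen later in the kubelet log).
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}
```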
Oct 13 04:52:20.613965 containerd[1556]: time="2025-10-13T04:52:20.613903591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:20.614418 containerd[1556]: time="2025-10-13T04:52:20.614369271Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Oct 13 04:52:20.615237 containerd[1556]: time="2025-10-13T04:52:20.615201311Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:20.618614 containerd[1556]: time="2025-10-13T04:52:20.618580391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:20.619389 containerd[1556]: time="2025-10-13T04:52:20.619206351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.56780908s" Oct 13 04:52:20.619389 containerd[1556]: time="2025-10-13T04:52:20.619255271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 13 04:52:20.620022 containerd[1556]: time="2025-10-13T04:52:20.619999991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 04:52:21.058019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670274016.mount: Deactivated successfully. 
Oct 13 04:52:21.065230 containerd[1556]: time="2025-10-13T04:52:21.065171591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:52:21.066568 containerd[1556]: time="2025-10-13T04:52:21.066525591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 04:52:21.067536 containerd[1556]: time="2025-10-13T04:52:21.067478071Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:52:21.071578 containerd[1556]: time="2025-10-13T04:52:21.071501311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:52:21.072719 containerd[1556]: time="2025-10-13T04:52:21.072684791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.65516ms" Oct 13 04:52:21.072760 containerd[1556]: time="2025-10-13T04:52:21.072714831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 13 04:52:21.073380 containerd[1556]: time="2025-10-13T04:52:21.073347791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 13 04:52:21.593906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956338158.mount: Deactivated successfully. 
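The PullImage/ImageCreate lines above come from the containerd CRI plugin fetching the control-plane images. For illustration only, the same kind of pull can be reproduced against the socket logged earlier using containerd's Go client (github.com/containerd/containerd), in the "k8s.io" namespace the daemon registered with NRI; the pause:3.10 reference matches the image pulled just above:

```go
// Sketch: pull one of the images seen above through containerd's Go client,
// talking to the socket the daemon reported serving on and using the
// "k8s.io" namespace from the log. Everything else is illustrative.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := image.Size(ctx)
	log.Printf("pulled %s (%d bytes)", image.Name(), size)
}
```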
Oct 13 04:52:23.212278 containerd[1556]: time="2025-10-13T04:52:23.212201871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:23.212693 containerd[1556]: time="2025-10-13T04:52:23.212667631Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Oct 13 04:52:23.213607 containerd[1556]: time="2025-10-13T04:52:23.213559551Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:23.216112 containerd[1556]: time="2025-10-13T04:52:23.216078751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:23.217981 containerd[1556]: time="2025-10-13T04:52:23.217947191Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.14456496s" Oct 13 04:52:23.217981 containerd[1556]: time="2025-10-13T04:52:23.217980551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 13 04:52:29.309537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:29.309679 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108.4M memory peak. Oct 13 04:52:29.311564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:29.331944 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Oct 13 04:52:29.331962 systemd[1]: Reloading... Oct 13 04:52:29.409528 zram_generator::config[2288]: No configuration found. Oct 13 04:52:29.636345 systemd[1]: Reloading finished in 304 ms. Oct 13 04:52:29.682157 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 04:52:29.682237 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 04:52:29.682524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:29.682570 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak. Oct 13 04:52:29.684104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:29.795429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:29.799697 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 04:52:29.834206 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:52:29.834206 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 04:52:29.834206 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:52:29.834574 kubelet[2330]: I1013 04:52:29.834305 2330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 04:52:30.639306 kubelet[2330]: I1013 04:52:30.639253 2330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 04:52:30.639306 kubelet[2330]: I1013 04:52:30.639291 2330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 04:52:30.639604 kubelet[2330]: I1013 04:52:30.639578 2330 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 04:52:30.666247 kubelet[2330]: E1013 04:52:30.666199 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:30.667108 kubelet[2330]: I1013 04:52:30.667087 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 04:52:30.672543 kubelet[2330]: I1013 04:52:30.672520 2330 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 04:52:30.675289 kubelet[2330]: I1013 04:52:30.675256 2330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 04:52:30.675943 kubelet[2330]: I1013 04:52:30.675892 2330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 04:52:30.676100 kubelet[2330]: I1013 04:52:30.675938 2330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 04:52:30.676191 kubelet[2330]: I1013 04:52:30.676173 2330 
topology_manager.go:138] "Creating topology manager with none policy" Oct 13 04:52:30.676191 kubelet[2330]: I1013 04:52:30.676183 2330 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 04:52:30.676384 kubelet[2330]: I1013 04:52:30.676370 2330 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:52:30.678874 kubelet[2330]: I1013 04:52:30.678846 2330 kubelet.go:446] "Attempting to sync node with API server" Oct 13 04:52:30.678912 kubelet[2330]: I1013 04:52:30.678877 2330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 04:52:30.678912 kubelet[2330]: I1013 04:52:30.678902 2330 kubelet.go:352] "Adding apiserver pod source" Oct 13 04:52:30.678958 kubelet[2330]: I1013 04:52:30.678912 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 04:52:30.681351 kubelet[2330]: W1013 04:52:30.681104 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Oct 13 04:52:30.681351 kubelet[2330]: E1013 04:52:30.681164 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:30.681351 kubelet[2330]: W1013 04:52:30.681280 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Oct 13 04:52:30.681351 kubelet[2330]: E1013 04:52:30.681323 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:30.682177 kubelet[2330]: I1013 04:52:30.682155 2330 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 04:52:30.683141 kubelet[2330]: I1013 04:52:30.683108 2330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 04:52:30.683267 kubelet[2330]: W1013 04:52:30.683250 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 13 04:52:30.684195 kubelet[2330]: I1013 04:52:30.684164 2330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 04:52:30.684255 kubelet[2330]: I1013 04:52:30.684218 2330 server.go:1287] "Started kubelet" Oct 13 04:52:30.684586 kubelet[2330]: I1013 04:52:30.684553 2330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 04:52:30.684926 kubelet[2330]: I1013 04:52:30.684891 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 04:52:30.685654 kubelet[2330]: I1013 04:52:30.685630 2330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 04:52:30.686661 kubelet[2330]: I1013 04:52:30.686326 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 04:52:30.686921 kubelet[2330]: I1013 04:52:30.686898 2330 server.go:479] "Adding debug handlers to kubelet server" Oct 13 04:52:30.688072 kubelet[2330]: I1013 04:52:30.688044 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 04:52:30.689156 kubelet[2330]: E1013 04:52:30.688846 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df3dadb95df1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 04:52:30.684184351 +0000 UTC m=+0.881612761,LastTimestamp:2025-10-13 04:52:30.684184351 +0000 UTC m=+0.881612761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 04:52:30.690130 kubelet[2330]: E1013 04:52:30.690078 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:52:30.690130 kubelet[2330]: I1013 04:52:30.690117 2330 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 04:52:30.690522 kubelet[2330]: W1013 04:52:30.690418 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Oct 13 04:52:30.690522 kubelet[2330]: I1013 04:52:30.690127 2330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 04:52:30.690522 kubelet[2330]: E1013 04:52:30.690468 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:30.690628 kubelet[2330]: I1013 04:52:30.690560 2330 reconciler.go:26] "Reconciler: start to sync state" Oct 13 04:52:30.691221 kubelet[2330]: E1013 04:52:30.691070 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: 
connect: connection refused" interval="200ms" Oct 13 04:52:30.691423 kubelet[2330]: I1013 04:52:30.691391 2330 factory.go:221] Registration of the systemd container factory successfully Oct 13 04:52:30.691598 kubelet[2330]: I1013 04:52:30.691581 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 04:52:30.694924 kubelet[2330]: E1013 04:52:30.694894 2330 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 04:52:30.695743 kubelet[2330]: I1013 04:52:30.695718 2330 factory.go:221] Registration of the containerd container factory successfully Oct 13 04:52:30.707446 kubelet[2330]: I1013 04:52:30.707407 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 04:52:30.707446 kubelet[2330]: I1013 04:52:30.707423 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 04:52:30.707446 kubelet[2330]: I1013 04:52:30.707439 2330 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:52:30.708399 kubelet[2330]: I1013 04:52:30.708261 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 04:52:30.709274 kubelet[2330]: I1013 04:52:30.709254 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 04:52:30.709373 kubelet[2330]: I1013 04:52:30.709362 2330 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 04:52:30.709436 kubelet[2330]: I1013 04:52:30.709424 2330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 04:52:30.709477 kubelet[2330]: I1013 04:52:30.709470 2330 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 04:52:30.709681 kubelet[2330]: E1013 04:52:30.709662 2330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 04:52:30.713211 kubelet[2330]: W1013 04:52:30.713146 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Oct 13 04:52:30.713287 kubelet[2330]: E1013 04:52:30.713222 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:30.791295 kubelet[2330]: E1013 04:52:30.791252 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:52:30.795800 kubelet[2330]: I1013 04:52:30.795774 2330 policy_none.go:49] "None policy: Start" Oct 13 04:52:30.795800 kubelet[2330]: I1013 04:52:30.795793 2330 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 04:52:30.795800 kubelet[2330]: I1013 04:52:30.795806 2330 state_mem.go:35] "Initializing new in-memory state store" Oct 13 04:52:30.802233 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
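The repeated "dial tcp 10.0.0.32:6443: connect: connection refused" errors above are expected at this stage: the kubelet is trying to reach an API server that will only exist once the kube-apiserver static pod created below is running. As a sketch, a small probe against the same endpoint shows the same behaviour; skipping certificate verification here is an assumption made only to keep the example self-contained:

```go
// Sketch: probe the endpoint the kubelet keeps failing to reach
// (https://10.0.0.32:6443). Until the kube-apiserver static pod below is
// running this prints "connection refused"; TLS verification is skipped
// purely to keep the illustration self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://10.0.0.32:6443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("apiserver answered %s: %s\n", resp.Status, body)
}
```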
Oct 13 04:52:30.810088 kubelet[2330]: E1013 04:52:30.810055 2330 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 04:52:30.820592 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 04:52:30.823654 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 04:52:30.841418 kubelet[2330]: I1013 04:52:30.841285 2330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 04:52:30.841677 kubelet[2330]: I1013 04:52:30.841476 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 04:52:30.841677 kubelet[2330]: I1013 04:52:30.841487 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 04:52:30.842013 kubelet[2330]: I1013 04:52:30.841991 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 04:52:30.842754 kubelet[2330]: E1013 04:52:30.842735 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 04:52:30.842816 kubelet[2330]: E1013 04:52:30.842779 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 04:52:30.892523 kubelet[2330]: E1013 04:52:30.892432 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" Oct 13 04:52:30.943625 kubelet[2330]: I1013 04:52:30.943558 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:52:30.944033 kubelet[2330]: E1013 04:52:30.943987 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Oct 13 04:52:31.018422 systemd[1]: Created slice kubepods-burstable-pod057429968f1de6a5c3976e6a56c59f9c.slice - libcontainer container kubepods-burstable-pod057429968f1de6a5c3976e6a56c59f9c.slice. Oct 13 04:52:31.049124 kubelet[2330]: E1013 04:52:31.049081 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.051845 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 13 04:52:31.053843 kubelet[2330]: E1013 04:52:31.053803 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.056147 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 13 04:52:31.057983 kubelet[2330]: E1013 04:52:31.057959 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.093384 kubelet[2330]: I1013 04:52:31.093341 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:31.093473 kubelet[2330]: I1013 04:52:31.093422 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:31.093473 kubelet[2330]: I1013 04:52:31.093447 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:31.093550 kubelet[2330]: I1013 04:52:31.093469 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:31.093550 kubelet[2330]: I1013 04:52:31.093532 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:31.093599 kubelet[2330]: I1013 04:52:31.093549 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:31.093599 kubelet[2330]: I1013 04:52:31.093565 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:31.093644 kubelet[2330]: I1013 04:52:31.093579 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:31.093644 kubelet[2330]: I1013 04:52:31.093618 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:31.145855 kubelet[2330]: I1013 04:52:31.145578 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:52:31.146332 kubelet[2330]: E1013 04:52:31.146034 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Oct 13 04:52:31.293654 kubelet[2330]: E1013 04:52:31.293598 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" Oct 13 04:52:31.350186 kubelet[2330]: E1013 04:52:31.350091 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.350793 containerd[1556]: time="2025-10-13T04:52:31.350747191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:057429968f1de6a5c3976e6a56c59f9c,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:31.355210 kubelet[2330]: E1013 04:52:31.354982 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.355496 containerd[1556]: time="2025-10-13T04:52:31.355464751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:31.358785 kubelet[2330]: E1013 04:52:31.358721 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.359413 containerd[1556]: time="2025-10-13T04:52:31.359245711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:31.373305 containerd[1556]: time="2025-10-13T04:52:31.373232951Z" level=info msg="connecting to shim 61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d" address="unix:///run/containerd/s/fcf8892c960bb519a9fc20916c015202295653f630ec00784b0e1bfe3ccf6de3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:31.387309 containerd[1556]: time="2025-10-13T04:52:31.387268231Z" level=info msg="connecting to shim 4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e" address="unix:///run/containerd/s/9ce93a07e4e423cd917f7a5a1684169f3d7ab99ffe10459cdb4f4661b5c05e12" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:31.389936 containerd[1556]: time="2025-10-13T04:52:31.389897351Z" level=info msg="connecting to shim a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746" address="unix:///run/containerd/s/bb4bf7936dc24f9b9b24d1a198408ca781e0c29520e87905e71f6c6f3297853f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:31.407659 systemd[1]: Started cri-containerd-61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d.scope - libcontainer container 
61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d. Oct 13 04:52:31.411208 systemd[1]: Started cri-containerd-a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746.scope - libcontainer container a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746. Oct 13 04:52:31.416992 systemd[1]: Started cri-containerd-4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e.scope - libcontainer container 4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e. Oct 13 04:52:31.456300 containerd[1556]: time="2025-10-13T04:52:31.456110231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:057429968f1de6a5c3976e6a56c59f9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d\"" Oct 13 04:52:31.457893 kubelet[2330]: E1013 04:52:31.457867 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.458062 containerd[1556]: time="2025-10-13T04:52:31.457923631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e\"" Oct 13 04:52:31.458605 kubelet[2330]: E1013 04:52:31.458586 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.458953 containerd[1556]: time="2025-10-13T04:52:31.458922111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746\"" Oct 13 04:52:31.460242 kubelet[2330]: E1013 04:52:31.460219 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.460586 containerd[1556]: time="2025-10-13T04:52:31.460529871Z" level=info msg="CreateContainer within sandbox \"61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 04:52:31.460786 containerd[1556]: time="2025-10-13T04:52:31.460743871Z" level=info msg="CreateContainer within sandbox \"4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 04:52:31.461786 containerd[1556]: time="2025-10-13T04:52:31.461746631Z" level=info msg="CreateContainer within sandbox \"a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 04:52:31.469221 containerd[1556]: time="2025-10-13T04:52:31.469189911Z" level=info msg="Container ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:31.470868 containerd[1556]: time="2025-10-13T04:52:31.470839831Z" level=info msg="Container e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:31.473381 containerd[1556]: time="2025-10-13T04:52:31.472806391Z" level=info msg="Container 
13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:31.476883 containerd[1556]: time="2025-10-13T04:52:31.476842071Z" level=info msg="CreateContainer within sandbox \"4795c32169a4db30782c983a8d2fbb8d4b962b37f7697e62821c6e0b8cce0c9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707\"" Oct 13 04:52:31.477570 containerd[1556]: time="2025-10-13T04:52:31.477547151Z" level=info msg="StartContainer for \"ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707\"" Oct 13 04:52:31.478622 containerd[1556]: time="2025-10-13T04:52:31.478593191Z" level=info msg="connecting to shim ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707" address="unix:///run/containerd/s/9ce93a07e4e423cd917f7a5a1684169f3d7ab99ffe10459cdb4f4661b5c05e12" protocol=ttrpc version=3 Oct 13 04:52:31.481270 containerd[1556]: time="2025-10-13T04:52:31.481231311Z" level=info msg="CreateContainer within sandbox \"61d5477bc6ea9d6a9a219b7ae61280b8d9ac8e127454c1bcb98acca3d681fa2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814\"" Oct 13 04:52:31.482414 containerd[1556]: time="2025-10-13T04:52:31.482314871Z" level=info msg="CreateContainer within sandbox \"a546c58875161cb9fab149a7b2ea58a210f13a268a8d4d2d4bd01aae9fe18746\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668\"" Oct 13 04:52:31.482460 containerd[1556]: time="2025-10-13T04:52:31.482399431Z" level=info msg="StartContainer for \"e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814\"" Oct 13 04:52:31.482626 containerd[1556]: time="2025-10-13T04:52:31.482603911Z" level=info msg="StartContainer for \"13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668\"" Oct 13 04:52:31.483636 containerd[1556]: time="2025-10-13T04:52:31.483562471Z" level=info msg="connecting to shim 13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668" address="unix:///run/containerd/s/bb4bf7936dc24f9b9b24d1a198408ca781e0c29520e87905e71f6c6f3297853f" protocol=ttrpc version=3 Oct 13 04:52:31.484034 kubelet[2330]: W1013 04:52:31.483981 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused Oct 13 04:52:31.484091 kubelet[2330]: E1013 04:52:31.484048 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:52:31.484113 containerd[1556]: time="2025-10-13T04:52:31.484073831Z" level=info msg="connecting to shim e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814" address="unix:///run/containerd/s/fcf8892c960bb519a9fc20916c015202295653f630ec00784b0e1bfe3ccf6de3" protocol=ttrpc version=3 Oct 13 04:52:31.498685 systemd[1]: Started cri-containerd-ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707.scope - libcontainer container ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707. 
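Editor's note: the repeated "dial tcp 10.0.0.32:6443: connect: connection refused" errors above are expected at this point in the boot: the kubelet is trying to register the node and start its watches against the API server at 10.0.0.32:6443 while the kube-apiserver static pod it is creating in these same entries has not started listening yet. A minimal, hedged sketch (address taken from the log; this is not kubelet code) that probes the endpoint the same way:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint copied from the kubelet errors above; adjust as needed.
        const apiserver = "10.0.0.32:6443"
        conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
        if err != nil {
            // While the static pod is still starting this prints
            // "connect: connection refused", matching the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver TCP port is accepting connections")
    }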
Oct 13 04:52:31.503084 systemd[1]: Started cri-containerd-13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668.scope - libcontainer container 13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668. Oct 13 04:52:31.506351 systemd[1]: Started cri-containerd-e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814.scope - libcontainer container e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814. Oct 13 04:52:31.551044 containerd[1556]: time="2025-10-13T04:52:31.550929351Z" level=info msg="StartContainer for \"ac436ee2f3706e716d64994732bf518c80f2c28669f535b75874cc1d86864707\" returns successfully" Oct 13 04:52:31.551201 kubelet[2330]: I1013 04:52:31.550981 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:52:31.551432 kubelet[2330]: E1013 04:52:31.551402 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Oct 13 04:52:31.554759 containerd[1556]: time="2025-10-13T04:52:31.554573311Z" level=info msg="StartContainer for \"e565d26e22bf55705a2edd94d3a1e70dfdb214921bf7eec7579abbca23a55814\" returns successfully" Oct 13 04:52:31.554759 containerd[1556]: time="2025-10-13T04:52:31.555660071Z" level=info msg="StartContainer for \"13bed4927081663150b54c08d3c0ee2e6fd581993c7eac5a0bf7ebcaba923668\" returns successfully" Oct 13 04:52:31.725671 kubelet[2330]: E1013 04:52:31.725564 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.725903 kubelet[2330]: E1013 04:52:31.725686 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.728365 kubelet[2330]: E1013 04:52:31.728347 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.728612 kubelet[2330]: E1013 04:52:31.728593 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:31.733608 kubelet[2330]: E1013 04:52:31.733574 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:31.733705 kubelet[2330]: E1013 04:52:31.733678 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:32.353393 kubelet[2330]: I1013 04:52:32.353360 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:52:32.733555 kubelet[2330]: E1013 04:52:32.733341 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:32.733555 kubelet[2330]: E1013 04:52:32.733460 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:32.733833 kubelet[2330]: E1013 04:52:32.733728 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:52:32.733927 kubelet[2330]: E1013 04:52:32.733853 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:33.682229 kubelet[2330]: I1013 04:52:33.682201 2330 apiserver.go:52] "Watching apiserver" Oct 13 04:52:33.694722 kubelet[2330]: E1013 04:52:33.694675 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 04:52:33.779460 kubelet[2330]: I1013 04:52:33.779414 2330 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 04:52:33.791334 kubelet[2330]: I1013 04:52:33.791292 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:33.791499 kubelet[2330]: I1013 04:52:33.791478 2330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 04:52:33.809203 kubelet[2330]: E1013 04:52:33.809163 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:33.809203 kubelet[2330]: I1013 04:52:33.809205 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:33.811177 kubelet[2330]: E1013 04:52:33.811151 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:33.811177 kubelet[2330]: I1013 04:52:33.811176 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:33.814342 kubelet[2330]: E1013 04:52:33.814317 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:35.513851 kubelet[2330]: I1013 04:52:35.513819 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:35.519799 kubelet[2330]: E1013 04:52:35.519765 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:35.733356 systemd[1]: Reload requested from client PID 2607 ('systemctl') (unit session-7.scope)... Oct 13 04:52:35.733371 systemd[1]: Reloading... Oct 13 04:52:35.739020 kubelet[2330]: E1013 04:52:35.738980 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:35.796538 zram_generator::config[2651]: No configuration found. Oct 13 04:52:36.038342 systemd[1]: Reloading finished in 304 ms. Oct 13 04:52:36.054241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:36.073648 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 04:52:36.074089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:36.074150 systemd[1]: kubelet.service: Consumed 1.256s CPU time, 128M memory peak. 
Oct 13 04:52:36.076219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:52:36.220384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:52:36.226327 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 04:52:36.267218 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:52:36.267218 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 04:52:36.267218 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:52:36.267581 kubelet[2693]: I1013 04:52:36.267270 2693 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 04:52:36.274536 kubelet[2693]: I1013 04:52:36.272800 2693 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 04:52:36.274536 kubelet[2693]: I1013 04:52:36.273025 2693 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 04:52:36.274536 kubelet[2693]: I1013 04:52:36.273461 2693 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 04:52:36.275185 kubelet[2693]: I1013 04:52:36.275167 2693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 13 04:52:36.278011 kubelet[2693]: I1013 04:52:36.277984 2693 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 04:52:36.284545 kubelet[2693]: I1013 04:52:36.283596 2693 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 04:52:36.286205 kubelet[2693]: I1013 04:52:36.286176 2693 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 04:52:36.286443 kubelet[2693]: I1013 04:52:36.286399 2693 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 04:52:36.286592 kubelet[2693]: I1013 04:52:36.286427 2693 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 04:52:36.286667 kubelet[2693]: I1013 04:52:36.286601 2693 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 04:52:36.286667 kubelet[2693]: I1013 04:52:36.286611 2693 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 04:52:36.286667 kubelet[2693]: I1013 04:52:36.286647 2693 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:52:36.286793 kubelet[2693]: I1013 04:52:36.286778 2693 kubelet.go:446] "Attempting to sync node with API server" Oct 13 04:52:36.286823 kubelet[2693]: I1013 04:52:36.286794 2693 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 04:52:36.286823 kubelet[2693]: I1013 04:52:36.286817 2693 kubelet.go:352] "Adding apiserver pod source" Oct 13 04:52:36.286862 kubelet[2693]: I1013 04:52:36.286829 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 04:52:36.287468 kubelet[2693]: I1013 04:52:36.287393 2693 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 04:52:36.287904 kubelet[2693]: I1013 04:52:36.287875 2693 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 04:52:36.289582 kubelet[2693]: I1013 04:52:36.288382 2693 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 04:52:36.289582 kubelet[2693]: I1013 04:52:36.288415 2693 server.go:1287] "Started kubelet" Oct 13 04:52:36.289910 kubelet[2693]: I1013 04:52:36.289874 2693 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 04:52:36.290380 kubelet[2693]: I1013 04:52:36.290328 2693 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 04:52:36.290597 kubelet[2693]: I1013 04:52:36.290570 2693 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 04:52:36.292400 kubelet[2693]: I1013 04:52:36.292371 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 04:52:36.293992 kubelet[2693]: E1013 04:52:36.293956 2693 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 04:52:36.299513 kubelet[2693]: I1013 04:52:36.298403 2693 server.go:479] "Adding debug handlers to kubelet server" Oct 13 04:52:36.303399 kubelet[2693]: I1013 04:52:36.302410 2693 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 04:52:36.304515 kubelet[2693]: I1013 04:52:36.304381 2693 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 04:52:36.309321 kubelet[2693]: I1013 04:52:36.309279 2693 factory.go:221] Registration of the systemd container factory successfully Oct 13 04:52:36.309397 kubelet[2693]: I1013 04:52:36.309383 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 04:52:36.311086 kubelet[2693]: I1013 04:52:36.310607 2693 factory.go:221] Registration of the containerd container factory successfully Oct 13 04:52:36.312094 kubelet[2693]: I1013 04:52:36.312067 2693 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 04:52:36.312241 kubelet[2693]: I1013 04:52:36.312225 2693 reconciler.go:26] "Reconciler: start to sync state" Oct 13 04:52:36.315882 kubelet[2693]: I1013 04:52:36.315832 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 04:52:36.317911 kubelet[2693]: I1013 04:52:36.317595 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 04:52:36.317911 kubelet[2693]: I1013 04:52:36.317621 2693 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 04:52:36.317911 kubelet[2693]: I1013 04:52:36.317644 2693 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
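Editor's note: the cadvisor factory messages above show why only containerd is used on this node: registering the crio factory fails because /var/run/crio/crio.sock does not exist, while the containerd factory registers successfully. A small sketch that performs the same kind of existence check; the containerd socket path below is the usual default and is an assumption, not something taken from this log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        sockets := []string{
            "/var/run/crio/crio.sock",                     // missing here, so the crio factory fails
            "/run/containerd/containerd.sock",             // assumed default containerd socket path
            "/var/lib/kubelet/pod-resources/kubelet.sock", // podresources endpoint named in the log
        }
        for _, s := range sockets {
            if _, err := os.Stat(s); err != nil {
                fmt.Printf("%s: not present (%v)\n", s, err)
            } else {
                fmt.Printf("%s: present\n", s)
            }
        }
    }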
Oct 13 04:52:36.317911 kubelet[2693]: I1013 04:52:36.317650 2693 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 04:52:36.317911 kubelet[2693]: E1013 04:52:36.317694 2693 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 04:52:36.346883 kubelet[2693]: I1013 04:52:36.346857 2693 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 04:52:36.347052 kubelet[2693]: I1013 04:52:36.347037 2693 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 04:52:36.347135 kubelet[2693]: I1013 04:52:36.347125 2693 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:52:36.347473 kubelet[2693]: I1013 04:52:36.347455 2693 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 04:52:36.347594 kubelet[2693]: I1013 04:52:36.347567 2693 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 04:52:36.347640 kubelet[2693]: I1013 04:52:36.347633 2693 policy_none.go:49] "None policy: Start" Oct 13 04:52:36.347684 kubelet[2693]: I1013 04:52:36.347677 2693 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 04:52:36.347757 kubelet[2693]: I1013 04:52:36.347746 2693 state_mem.go:35] "Initializing new in-memory state store" Oct 13 04:52:36.347940 kubelet[2693]: I1013 04:52:36.347925 2693 state_mem.go:75] "Updated machine memory state" Oct 13 04:52:36.351771 kubelet[2693]: I1013 04:52:36.351751 2693 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 04:52:36.352104 kubelet[2693]: I1013 04:52:36.351892 2693 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 04:52:36.352104 kubelet[2693]: I1013 04:52:36.351907 2693 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 04:52:36.352104 kubelet[2693]: I1013 04:52:36.352077 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 04:52:36.353498 kubelet[2693]: E1013 04:52:36.353473 2693 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 04:52:36.419232 kubelet[2693]: I1013 04:52:36.419133 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:36.419232 kubelet[2693]: I1013 04:52:36.419135 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.419390 kubelet[2693]: I1013 04:52:36.419293 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:36.425821 kubelet[2693]: E1013 04:52:36.425776 2693 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:36.458474 kubelet[2693]: I1013 04:52:36.458442 2693 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:52:36.469492 kubelet[2693]: I1013 04:52:36.469454 2693 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 04:52:36.469679 kubelet[2693]: I1013 04:52:36.469559 2693 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 04:52:36.613303 kubelet[2693]: I1013 04:52:36.613177 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.613303 kubelet[2693]: I1013 04:52:36.613216 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:36.613303 kubelet[2693]: I1013 04:52:36.613237 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.613303 kubelet[2693]: I1013 04:52:36.613255 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:36.613303 kubelet[2693]: I1013 04:52:36.613270 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.613494 kubelet[2693]: I1013 04:52:36.613285 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.613494 kubelet[2693]: I1013 04:52:36.613308 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:52:36.613494 kubelet[2693]: I1013 04:52:36.613325 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:36.613494 kubelet[2693]: I1013 04:52:36.613345 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057429968f1de6a5c3976e6a56c59f9c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"057429968f1de6a5c3976e6a56c59f9c\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:52:36.726309 kubelet[2693]: E1013 04:52:36.726053 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:36.727743 kubelet[2693]: E1013 04:52:36.727615 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:36.727743 kubelet[2693]: E1013 04:52:36.727727 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:36.740978 sudo[2729]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 13 04:52:36.741219 sudo[2729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 13 04:52:37.069050 sudo[2729]: pam_unix(sudo:session): session closed for user root Oct 13 04:52:37.288811 kubelet[2693]: I1013 04:52:37.288763 2693 apiserver.go:52] "Watching apiserver" Oct 13 04:52:37.312717 kubelet[2693]: I1013 04:52:37.312672 2693 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 04:52:37.339688 kubelet[2693]: E1013 04:52:37.335569 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:37.339688 kubelet[2693]: I1013 04:52:37.336241 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:37.340131 kubelet[2693]: E1013 04:52:37.340115 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:37.341559 kubelet[2693]: E1013 04:52:37.341539 2693 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 04:52:37.341790 kubelet[2693]: E1013 04:52:37.341775 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:37.362382 kubelet[2693]: I1013 04:52:37.362309 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.362279671 podStartE2EDuration="2.362279671s" podCreationTimestamp="2025-10-13 04:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:52:37.360552831 +0000 UTC m=+1.129654561" watchObservedRunningTime="2025-10-13 04:52:37.362279671 +0000 UTC m=+1.131381401" Oct 13 04:52:37.381744 kubelet[2693]: I1013 04:52:37.381685 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.381663551 podStartE2EDuration="1.381663551s" podCreationTimestamp="2025-10-13 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:52:37.381016191 +0000 UTC m=+1.150117921" watchObservedRunningTime="2025-10-13 04:52:37.381663551 +0000 UTC m=+1.150765281" Oct 13 04:52:37.382314 kubelet[2693]: I1013 04:52:37.382175 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.382165791 podStartE2EDuration="1.382165791s" podCreationTimestamp="2025-10-13 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:52:37.371393351 +0000 UTC m=+1.140495121" watchObservedRunningTime="2025-10-13 04:52:37.382165791 +0000 UTC m=+1.151267561" Oct 13 04:52:38.336785 kubelet[2693]: E1013 04:52:38.336714 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:38.337176 kubelet[2693]: E1013 04:52:38.336869 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:38.853667 sudo[1770]: pam_unix(sudo:session): session closed for user root Oct 13 04:52:38.855494 sshd[1769]: Connection closed by 10.0.0.1 port 51584 Oct 13 04:52:38.858756 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Oct 13 04:52:38.862434 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:51584.service: Deactivated successfully. Oct 13 04:52:38.864192 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 04:52:38.864350 systemd[1]: session-7.scope: Consumed 7.997s CPU time, 252.1M memory peak. Oct 13 04:52:38.865262 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit. Oct 13 04:52:38.866273 systemd-logind[1541]: Removed session 7. Oct 13 04:52:40.676914 kubelet[2693]: I1013 04:52:40.676869 2693 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 04:52:40.677251 containerd[1556]: time="2025-10-13T04:52:40.677165233Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
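Editor's note: the kuberuntime_manager entry above (and the kubelet_network entry that follows) show the node being assigned the pod CIDR 192.168.0.0/24 and pushing it to the container runtime; the CNI configuration itself is left for Cilium to drop in later, hence "No cni config template is specified". A hedged sketch of what membership in that CIDR means, using hypothetical addresses for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // CIDR pushed to the runtime in the entries above.
        _, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        // Hypothetical addresses, just to illustrate the range.
        for _, ip := range []string{"192.168.0.17", "192.168.1.5", "10.0.0.32"} {
            fmt.Printf("%s in %s: %v\n", ip, podCIDR, podCIDR.Contains(net.ParseIP(ip)))
        }
    }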
Oct 13 04:52:40.677411 kubelet[2693]: I1013 04:52:40.677342 2693 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 04:52:41.412834 systemd[1]: Created slice kubepods-besteffort-podebd82312_7e6e_4df7_93f7_318579a39af1.slice - libcontainer container kubepods-besteffort-podebd82312_7e6e_4df7_93f7_318579a39af1.slice. Oct 13 04:52:41.428960 systemd[1]: Created slice kubepods-burstable-pod50ed0e2a_8871_43bd_a142_58e260e6704b.slice - libcontainer container kubepods-burstable-pod50ed0e2a_8871_43bd_a142_58e260e6704b.slice. Oct 13 04:52:41.448772 kubelet[2693]: I1013 04:52:41.448733 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-etc-cni-netd\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.448772 kubelet[2693]: I1013 04:52:41.448780 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebd82312-7e6e-4df7-93f7-318579a39af1-xtables-lock\") pod \"kube-proxy-xk99v\" (UID: \"ebd82312-7e6e-4df7-93f7-318579a39af1\") " pod="kube-system/kube-proxy-xk99v" Oct 13 04:52:41.448983 kubelet[2693]: I1013 04:52:41.448799 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebd82312-7e6e-4df7-93f7-318579a39af1-lib-modules\") pod \"kube-proxy-xk99v\" (UID: \"ebd82312-7e6e-4df7-93f7-318579a39af1\") " pod="kube-system/kube-proxy-xk99v" Oct 13 04:52:41.448983 kubelet[2693]: I1013 04:52:41.448815 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p556\" (UniqueName: \"kubernetes.io/projected/ebd82312-7e6e-4df7-93f7-318579a39af1-kube-api-access-2p556\") pod \"kube-proxy-xk99v\" (UID: \"ebd82312-7e6e-4df7-93f7-318579a39af1\") " pod="kube-system/kube-proxy-xk99v" Oct 13 04:52:41.448983 kubelet[2693]: I1013 04:52:41.448835 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50ed0e2a-8871-43bd-a142-58e260e6704b-clustermesh-secrets\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.448983 kubelet[2693]: I1013 04:52:41.448850 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-hostproc\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.448983 kubelet[2693]: I1013 04:52:41.448873 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-config-path\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448908 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29mv\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv\") pod \"cilium-znrdk\" (UID: 
\"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448930 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-hubble-tls\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448944 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-run\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448966 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-bpf-maps\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448980 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cni-path\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449087 kubelet[2693]: I1013 04:52:41.448994 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-kernel\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449205 kubelet[2693]: I1013 04:52:41.449016 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebd82312-7e6e-4df7-93f7-318579a39af1-kube-proxy\") pod \"kube-proxy-xk99v\" (UID: \"ebd82312-7e6e-4df7-93f7-318579a39af1\") " pod="kube-system/kube-proxy-xk99v" Oct 13 04:52:41.449205 kubelet[2693]: I1013 04:52:41.449031 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-xtables-lock\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449205 kubelet[2693]: I1013 04:52:41.449045 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-net\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449205 kubelet[2693]: I1013 04:52:41.449083 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-lib-modules\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.449205 kubelet[2693]: I1013 04:52:41.449107 2693 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-cgroup\") pod \"cilium-znrdk\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " pod="kube-system/cilium-znrdk" Oct 13 04:52:41.525243 kubelet[2693]: E1013 04:52:41.525193 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:41.567647 kubelet[2693]: E1013 04:52:41.567615 2693 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 04:52:41.567647 kubelet[2693]: E1013 04:52:41.567645 2693 projected.go:194] Error preparing data for projected volume kube-api-access-p29mv for pod kube-system/cilium-znrdk: configmap "kube-root-ca.crt" not found Oct 13 04:52:41.567799 kubelet[2693]: E1013 04:52:41.567700 2693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv podName:50ed0e2a-8871-43bd-a142-58e260e6704b nodeName:}" failed. No retries permitted until 2025-10-13 04:52:42.067679967 +0000 UTC m=+5.836781697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p29mv" (UniqueName: "kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv") pod "cilium-znrdk" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b") : configmap "kube-root-ca.crt" not found Oct 13 04:52:41.570182 kubelet[2693]: E1013 04:52:41.570088 2693 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 04:52:41.570182 kubelet[2693]: E1013 04:52:41.570117 2693 projected.go:194] Error preparing data for projected volume kube-api-access-2p556 for pod kube-system/kube-proxy-xk99v: configmap "kube-root-ca.crt" not found Oct 13 04:52:41.570182 kubelet[2693]: E1013 04:52:41.570160 2693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebd82312-7e6e-4df7-93f7-318579a39af1-kube-api-access-2p556 podName:ebd82312-7e6e-4df7-93f7-318579a39af1 nodeName:}" failed. No retries permitted until 2025-10-13 04:52:42.070146505 +0000 UTC m=+5.839248195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2p556" (UniqueName: "kubernetes.io/projected/ebd82312-7e6e-4df7-93f7-318579a39af1-kube-api-access-2p556") pod "kube-proxy-xk99v" (UID: "ebd82312-7e6e-4df7-93f7-318579a39af1") : configmap "kube-root-ca.crt" not found Oct 13 04:52:41.817777 systemd[1]: Created slice kubepods-besteffort-pod95b0640a_2e4a_48b5_95ed_71d264f48fdb.slice - libcontainer container kubepods-besteffort-pod95b0640a_2e4a_48b5_95ed_71d264f48fdb.slice. 
Oct 13 04:52:41.851924 kubelet[2693]: I1013 04:52:41.851863 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggqh\" (UniqueName: \"kubernetes.io/projected/95b0640a-2e4a-48b5-95ed-71d264f48fdb-kube-api-access-hggqh\") pod \"cilium-operator-6c4d7847fc-hj7np\" (UID: \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\") " pod="kube-system/cilium-operator-6c4d7847fc-hj7np" Oct 13 04:52:41.851924 kubelet[2693]: I1013 04:52:41.851909 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95b0640a-2e4a-48b5-95ed-71d264f48fdb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hj7np\" (UID: \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\") " pod="kube-system/cilium-operator-6c4d7847fc-hj7np" Oct 13 04:52:42.121191 kubelet[2693]: E1013 04:52:42.121047 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.121933 containerd[1556]: time="2025-10-13T04:52:42.121828027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hj7np,Uid:95b0640a-2e4a-48b5-95ed-71d264f48fdb,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:42.137473 containerd[1556]: time="2025-10-13T04:52:42.137325972Z" level=info msg="connecting to shim 42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f" address="unix:///run/containerd/s/01dfe0c04d4b0355a65191bd688fdf3c0f30050d8c89b458d671721ada2242c9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:42.153667 systemd[1]: Started cri-containerd-42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f.scope - libcontainer container 42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f. 
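Editor's note: the MountVolume.SetUp failures above ("configmap \"kube-root-ca.crt\" not found ... No retries permitted until ... durationBeforeRetry 500ms") are transient: the projected token volumes are retried with an increasing delay until the ConfigMap exists, and by the time the sandboxes below are started the mounts have succeeded. A simplified sketch of that retry-with-backoff pattern, not the kubelet's actual code:

    package main

    import (
        "fmt"
        "time"
    )

    // retryWithBackoff retries op, doubling the wait after each failure up to maxDelay.
    func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i+1, err, delay)
            time.Sleep(delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("configmap %q not found", "kube-root-ca.crt")
            }
            return nil // the ConfigMap eventually exists and the mount succeeds
        }, 500*time.Millisecond, 8*time.Second, 5)
        fmt.Println("result:", err)
    }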
Oct 13 04:52:42.183645 containerd[1556]: time="2025-10-13T04:52:42.183606084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hj7np,Uid:95b0640a-2e4a-48b5-95ed-71d264f48fdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\"" Oct 13 04:52:42.184462 kubelet[2693]: E1013 04:52:42.184439 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.185676 containerd[1556]: time="2025-10-13T04:52:42.185649978Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 13 04:52:42.327352 kubelet[2693]: E1013 04:52:42.327320 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.327737 containerd[1556]: time="2025-10-13T04:52:42.327677058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xk99v,Uid:ebd82312-7e6e-4df7-93f7-318579a39af1,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:42.333707 kubelet[2693]: E1013 04:52:42.333660 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.334358 containerd[1556]: time="2025-10-13T04:52:42.334100621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-znrdk,Uid:50ed0e2a-8871-43bd-a142-58e260e6704b,Namespace:kube-system,Attempt:0,}" Oct 13 04:52:42.344608 kubelet[2693]: E1013 04:52:42.344584 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.362349 containerd[1556]: time="2025-10-13T04:52:42.362280332Z" level=info msg="connecting to shim c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:42.366560 containerd[1556]: time="2025-10-13T04:52:42.366500440Z" level=info msg="connecting to shim 10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a" address="unix:///run/containerd/s/b124b0af8f1427ab7e59f6e97b15339d3c3ec5d20265c8e2107727a2c6921a1e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:52:42.384781 systemd[1]: Started cri-containerd-c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808.scope - libcontainer container c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808. Oct 13 04:52:42.388561 systemd[1]: Started cri-containerd-10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a.scope - libcontainer container 10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a. 
Oct 13 04:52:42.413431 containerd[1556]: time="2025-10-13T04:52:42.413361757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xk99v,Uid:ebd82312-7e6e-4df7-93f7-318579a39af1,Namespace:kube-system,Attempt:0,} returns sandbox id \"10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a\"" Oct 13 04:52:42.414139 kubelet[2693]: E1013 04:52:42.414097 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.416188 containerd[1556]: time="2025-10-13T04:52:42.416033975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-znrdk,Uid:50ed0e2a-8871-43bd-a142-58e260e6704b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\"" Oct 13 04:52:42.417577 containerd[1556]: time="2025-10-13T04:52:42.416894701Z" level=info msg="CreateContainer within sandbox \"10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 04:52:42.417649 kubelet[2693]: E1013 04:52:42.416911 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.423401 containerd[1556]: time="2025-10-13T04:52:42.423370025Z" level=info msg="Container 29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:42.435522 kubelet[2693]: E1013 04:52:42.435475 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:42.450570 containerd[1556]: time="2025-10-13T04:52:42.450518488Z" level=info msg="CreateContainer within sandbox \"10477f793c9f7205ef4c6dfea69a731dab930ec30c2fd61f38da67cd7050143a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15\"" Oct 13 04:52:42.451203 containerd[1556]: time="2025-10-13T04:52:42.451167452Z" level=info msg="StartContainer for \"29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15\"" Oct 13 04:52:42.452575 containerd[1556]: time="2025-10-13T04:52:42.452547942Z" level=info msg="connecting to shim 29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15" address="unix:///run/containerd/s/b124b0af8f1427ab7e59f6e97b15339d3c3ec5d20265c8e2107727a2c6921a1e" protocol=ttrpc version=3 Oct 13 04:52:42.480680 systemd[1]: Started cri-containerd-29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15.scope - libcontainer container 29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15. 
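Editor's note: the pod_startup_latency_tracker entries (for the control-plane pods earlier, and for kube-proxy just after the container start that follows) report a podStartSLOduration that, for these pods with no image pull, is simply the gap between podCreationTimestamp and the observed running time. A sketch reproducing the kube-apiserver number from the log; the monotonic " m=+..." suffix is dropped before parsing:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kube-apiserver-localhost entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-10-13 04:52:35 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-10-13 04:52:37.362279671 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 2.362279671s, the podStartSLOduration reported in the log.
        fmt.Println(observed.Sub(created))
    }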
Oct 13 04:52:42.511042 containerd[1556]: time="2025-10-13T04:52:42.511003497Z" level=info msg="StartContainer for \"29124972d5e2795c76439ad0a6d6233d73281346f3dfaa7a468a4d71ce930d15\" returns successfully" Oct 13 04:52:43.357047 kubelet[2693]: E1013 04:52:43.357001 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:43.358146 kubelet[2693]: E1013 04:52:43.357706 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:43.359624 kubelet[2693]: E1013 04:52:43.359564 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:43.367164 kubelet[2693]: I1013 04:52:43.367030 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xk99v" podStartSLOduration=2.366062881 podStartE2EDuration="2.366062881s" podCreationTimestamp="2025-10-13 04:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:52:43.36594732 +0000 UTC m=+7.135049050" watchObservedRunningTime="2025-10-13 04:52:43.366062881 +0000 UTC m=+7.135164571" Oct 13 04:52:43.477238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755989393.mount: Deactivated successfully. Oct 13 04:52:46.513960 kubelet[2693]: E1013 04:52:46.513877 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:46.771221 containerd[1556]: time="2025-10-13T04:52:46.771174272Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:46.771861 containerd[1556]: time="2025-10-13T04:52:46.771822716Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 13 04:52:46.772486 containerd[1556]: time="2025-10-13T04:52:46.772449159Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:46.773540 containerd[1556]: time="2025-10-13T04:52:46.773498005Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.587815147s" Oct 13 04:52:46.773569 containerd[1556]: time="2025-10-13T04:52:46.773547005Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 13 04:52:46.775907 containerd[1556]: time="2025-10-13T04:52:46.775846377Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 13 04:52:46.776896 containerd[1556]: time="2025-10-13T04:52:46.776859502Z" level=info msg="CreateContainer within sandbox \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 13 04:52:46.786350 containerd[1556]: time="2025-10-13T04:52:46.785187506Z" level=info msg="Container 0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:46.795811 containerd[1556]: time="2025-10-13T04:52:46.795762161Z" level=info msg="CreateContainer within sandbox \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\"" Oct 13 04:52:46.796257 containerd[1556]: time="2025-10-13T04:52:46.796233363Z" level=info msg="StartContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\"" Oct 13 04:52:46.797246 containerd[1556]: time="2025-10-13T04:52:46.797199848Z" level=info msg="connecting to shim 0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb" address="unix:///run/containerd/s/01dfe0c04d4b0355a65191bd688fdf3c0f30050d8c89b458d671721ada2242c9" protocol=ttrpc version=3 Oct 13 04:52:46.853744 systemd[1]: Started cri-containerd-0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb.scope - libcontainer container 0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb. Oct 13 04:52:46.880249 containerd[1556]: time="2025-10-13T04:52:46.880138601Z" level=info msg="StartContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" returns successfully" Oct 13 04:52:47.367935 kubelet[2693]: E1013 04:52:47.367889 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:47.369885 kubelet[2693]: E1013 04:52:47.369857 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:47.380774 kubelet[2693]: I1013 04:52:47.380684 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hj7np" podStartSLOduration=1.790081649 podStartE2EDuration="6.38058461s" podCreationTimestamp="2025-10-13 04:52:41 +0000 UTC" firstStartedPulling="2025-10-13 04:52:42.185166095 +0000 UTC m=+5.954267825" lastFinishedPulling="2025-10-13 04:52:46.775669056 +0000 UTC m=+10.544770786" observedRunningTime="2025-10-13 04:52:47.380370049 +0000 UTC m=+11.149471779" watchObservedRunningTime="2025-10-13 04:52:47.38058461 +0000 UTC m=+11.149686380" Oct 13 04:52:48.369998 kubelet[2693]: E1013 04:52:48.369966 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:48.369998 kubelet[2693]: E1013 04:52:48.369994 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:52.323275 update_engine[1543]: I20251013 04:52:52.322997 1543 update_attempter.cc:509] Updating boot flags... 
Oct 13 04:52:53.002943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547411766.mount: Deactivated successfully. Oct 13 04:52:56.939870 containerd[1556]: time="2025-10-13T04:52:56.939812301Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 13 04:52:56.942052 containerd[1556]: time="2025-10-13T04:52:56.942017547Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.165992929s" Oct 13 04:52:56.942122 containerd[1556]: time="2025-10-13T04:52:56.942064828Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 13 04:52:56.945801 containerd[1556]: time="2025-10-13T04:52:56.945754438Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:56.946495 containerd[1556]: time="2025-10-13T04:52:56.946468080Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:52:56.959457 containerd[1556]: time="2025-10-13T04:52:56.959389675Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 04:52:56.972048 containerd[1556]: time="2025-10-13T04:52:56.971737429Z" level=info msg="Container 01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:56.976097 containerd[1556]: time="2025-10-13T04:52:56.975984360Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\"" Oct 13 04:52:56.978044 containerd[1556]: time="2025-10-13T04:52:56.978017406Z" level=info msg="StartContainer for \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\"" Oct 13 04:52:56.979166 containerd[1556]: time="2025-10-13T04:52:56.979129209Z" level=info msg="connecting to shim 01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" protocol=ttrpc version=3 Oct 13 04:52:57.015725 systemd[1]: Started cri-containerd-01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7.scope - libcontainer container 01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7. 
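The containerd entries above record 157,646,710 bytes read for the cilium image and a pull that completed in 10.165992929s. A back-of-the-envelope sketch for turning those two logged figures into an effective pull rate; this is plain arithmetic, not a containerd API, and it ignores any layers that were already present locally:

package main

import "fmt"

func main() {
	// "bytes read=157646710" and the pull duration, both copied from the log above.
	const bytesRead = 157646710.0
	const seconds = 10.165992929

	mbPerSec := bytesRead / seconds / 1e6         // decimal megabytes per second
	mibPerSec := bytesRead / seconds / (1 << 20)  // binary mebibytes per second
	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n", mbPerSec, mibPerSec)
}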
Oct 13 04:52:57.045077 containerd[1556]: time="2025-10-13T04:52:57.045037742Z" level=info msg="StartContainer for \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" returns successfully" Oct 13 04:52:57.054338 systemd[1]: cri-containerd-01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7.scope: Deactivated successfully. Oct 13 04:52:57.078452 containerd[1556]: time="2025-10-13T04:52:57.078405947Z" level=info msg="received exit event container_id:\"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" id:\"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" pid:3177 exited_at:{seconds:1760331177 nanos:74373217}" Oct 13 04:52:57.078706 containerd[1556]: time="2025-10-13T04:52:57.078496548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" id:\"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" pid:3177 exited_at:{seconds:1760331177 nanos:74373217}" Oct 13 04:52:57.110202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7-rootfs.mount: Deactivated successfully. Oct 13 04:52:57.396294 kubelet[2693]: E1013 04:52:57.396265 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:57.404765 containerd[1556]: time="2025-10-13T04:52:57.404727505Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 04:52:57.412526 containerd[1556]: time="2025-10-13T04:52:57.412163364Z" level=info msg="Container 06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:57.417973 containerd[1556]: time="2025-10-13T04:52:57.417934659Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\"" Oct 13 04:52:57.418417 containerd[1556]: time="2025-10-13T04:52:57.418392340Z" level=info msg="StartContainer for \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\"" Oct 13 04:52:57.419466 containerd[1556]: time="2025-10-13T04:52:57.419426903Z" level=info msg="connecting to shim 06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" protocol=ttrpc version=3 Oct 13 04:52:57.451054 systemd[1]: Started cri-containerd-06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363.scope - libcontainer container 06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363. Oct 13 04:52:57.480064 containerd[1556]: time="2025-10-13T04:52:57.479893778Z" level=info msg="StartContainer for \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" returns successfully" Oct 13 04:52:57.489294 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 04:52:57.489923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:52:57.489995 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
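The "Nameserver limits exceeded" errors that kubelet keeps logging throughout this journal mean the node's resolv.conf lists more nameservers than the resolver limit kubelet enforces, which is three (matching the classic glibc limit); the surplus entries are dropped and only the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is propagated to pods. A minimal sketch of that truncation, assuming a resolv.conf-style input with one server too many; this is an illustration, not kubelet's dns.go:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the common resolver limit of 3 that kubelet enforces.
const maxNameservers = 3

func main() {
	// Hypothetical resolv.conf content; the fourth nameserver triggers the warning.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
search example.internal`

	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, dropping: %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}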
Oct 13 04:52:57.491264 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 04:52:57.493189 systemd[1]: cri-containerd-06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363.scope: Deactivated successfully. Oct 13 04:52:57.495015 containerd[1556]: time="2025-10-13T04:52:57.494985737Z" level=info msg="received exit event container_id:\"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" id:\"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" pid:3223 exited_at:{seconds:1760331177 nanos:494487255}" Oct 13 04:52:57.496422 containerd[1556]: time="2025-10-13T04:52:57.496376340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" id:\"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" pid:3223 exited_at:{seconds:1760331177 nanos:494487255}" Oct 13 04:52:57.512861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:52:58.401527 kubelet[2693]: E1013 04:52:58.401256 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:58.410143 containerd[1556]: time="2025-10-13T04:52:58.410092021Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 04:52:58.437066 containerd[1556]: time="2025-10-13T04:52:58.436755605Z" level=info msg="Container bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:58.450244 containerd[1556]: time="2025-10-13T04:52:58.450182597Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\"" Oct 13 04:52:58.450988 containerd[1556]: time="2025-10-13T04:52:58.450771998Z" level=info msg="StartContainer for \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\"" Oct 13 04:52:58.452830 containerd[1556]: time="2025-10-13T04:52:58.452772123Z" level=info msg="connecting to shim bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" protocol=ttrpc version=3 Oct 13 04:52:58.477718 systemd[1]: Started cri-containerd-bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970.scope - libcontainer container bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970. Oct 13 04:52:58.561744 containerd[1556]: time="2025-10-13T04:52:58.561628345Z" level=info msg="StartContainer for \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" returns successfully" Oct 13 04:52:58.561723 systemd[1]: cri-containerd-bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970.scope: Deactivated successfully. 
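The containerd exit events above carry exited_at as a raw seconds/nanos pair (for example seconds:1760331177 nanos:494487255 for the apply-sysctl-overwrites container) rather than the human-readable timestamps used elsewhere in the journal. A small conversion sketch, assuming the seconds field is a Unix epoch value as containerd's protobuf timestamps are:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at {seconds, nanos} copied from the TaskExit event above; converting it
	// back to wall-clock time lines it up with the 04:52:57.49… journal entries.
	exitedAt := time.Unix(1760331177, 494487255).UTC()
	fmt.Println(exitedAt) // 2025-10-13 04:52:57.494487255 +0000 UTC
}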
Oct 13 04:52:58.562909 containerd[1556]: time="2025-10-13T04:52:58.562788268Z" level=info msg="received exit event container_id:\"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" id:\"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" pid:3271 exited_at:{seconds:1760331178 nanos:562462667}" Oct 13 04:52:58.563063 containerd[1556]: time="2025-10-13T04:52:58.563022589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" id:\"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" pid:3271 exited_at:{seconds:1760331178 nanos:562462667}" Oct 13 04:52:58.971859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970-rootfs.mount: Deactivated successfully. Oct 13 04:52:59.406718 kubelet[2693]: E1013 04:52:59.406549 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:52:59.409084 containerd[1556]: time="2025-10-13T04:52:59.408500802Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 04:52:59.463243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412277842.mount: Deactivated successfully. Oct 13 04:52:59.463874 containerd[1556]: time="2025-10-13T04:52:59.463411766Z" level=info msg="Container f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:52:59.468851 containerd[1556]: time="2025-10-13T04:52:59.468809338Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\"" Oct 13 04:52:59.469364 containerd[1556]: time="2025-10-13T04:52:59.469198299Z" level=info msg="StartContainer for \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\"" Oct 13 04:52:59.470288 containerd[1556]: time="2025-10-13T04:52:59.470253822Z" level=info msg="connecting to shim f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" protocol=ttrpc version=3 Oct 13 04:52:59.495748 systemd[1]: Started cri-containerd-f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073.scope - libcontainer container f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073. Oct 13 04:52:59.517573 systemd[1]: cri-containerd-f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073.scope: Deactivated successfully. 
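The tmpmount units systemd keeps reporting (for example var-lib-containerd-tmpmounts-containerd\x2dmount2412277842.mount just above) are mount paths under /var/lib/containerd/tmpmounts/ run through systemd's unit-name escaping: "/" separators become "-" and a literal "-" inside a path component becomes "\x2d". A simplified sketch of that path escaping, assuming ASCII input only; the authoritative rules are in systemd-escape and systemd's unit_name_from_path, not here:

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified version of systemd's path escaping for unit names:
// strip the leading "/", turn remaining "/" into "-", and hex-escape any byte
// that is not alphanumeric, ".", "_" or ":" as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '.', c == '_', c == ':':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Hypothetical mount point matching the pattern seen in the log.
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2412277842") + ".mount")
	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount2412277842.mount
}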
Oct 13 04:52:59.518335 containerd[1556]: time="2025-10-13T04:52:59.518208930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" id:\"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" pid:3310 exited_at:{seconds:1760331179 nanos:517684409}" Oct 13 04:52:59.518972 containerd[1556]: time="2025-10-13T04:52:59.518938092Z" level=info msg="received exit event container_id:\"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" id:\"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" pid:3310 exited_at:{seconds:1760331179 nanos:517684409}" Oct 13 04:52:59.520847 containerd[1556]: time="2025-10-13T04:52:59.520821256Z" level=info msg="StartContainer for \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" returns successfully" Oct 13 04:52:59.971990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073-rootfs.mount: Deactivated successfully. Oct 13 04:53:00.411674 kubelet[2693]: E1013 04:53:00.411643 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:00.417091 containerd[1556]: time="2025-10-13T04:53:00.417036499Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 04:53:00.430968 containerd[1556]: time="2025-10-13T04:53:00.430204167Z" level=info msg="Container 293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:00.431636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318797125.mount: Deactivated successfully. Oct 13 04:53:00.443764 containerd[1556]: time="2025-10-13T04:53:00.443643356Z" level=info msg="CreateContainer within sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\"" Oct 13 04:53:00.444146 containerd[1556]: time="2025-10-13T04:53:00.444122797Z" level=info msg="StartContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\"" Oct 13 04:53:00.445188 containerd[1556]: time="2025-10-13T04:53:00.445127319Z" level=info msg="connecting to shim 293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802" address="unix:///run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce" protocol=ttrpc version=3 Oct 13 04:53:00.465695 systemd[1]: Started cri-containerd-293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802.scope - libcontainer container 293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802. 
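Each "connecting to shim" line above names a per-sandbox unix socket under /run/containerd/s/ and the protocol spoken over it (ttrpc, version 3). The sketch below only illustrates that the logged address is an ordinary unix-domain socket path; it dials the socket and stops there, since the real exchange is ttrpc-framed protobuf rather than a plain byte stream. The path is copied from the log and this is not containerd client code:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim socket path taken from the "connecting to shim" entries above.
	const addr = "/run/containerd/s/3288d201c55222313aa8e1a295e4cfdaaeba6a7b16553b78a21ec52ccd6937ce"

	conn, err := net.DialTimeout("unix", addr, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected when run off this host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
}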
Oct 13 04:53:00.492315 containerd[1556]: time="2025-10-13T04:53:00.492278218Z" level=info msg="StartContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" returns successfully" Oct 13 04:53:00.581803 containerd[1556]: time="2025-10-13T04:53:00.581760928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" id:\"9e1bfe22f00b4a764e2e394b3d981b034acefeac7b1f8d5178c3c8da4229da53\" pid:3380 exited_at:{seconds:1760331180 nanos:581428447}" Oct 13 04:53:00.671064 kubelet[2693]: I1013 04:53:00.670766 2693 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 04:53:00.705059 systemd[1]: Created slice kubepods-burstable-pod40f9740f_f16c_43cb_a27e_7b617b161858.slice - libcontainer container kubepods-burstable-pod40f9740f_f16c_43cb_a27e_7b617b161858.slice. Oct 13 04:53:00.727636 systemd[1]: Created slice kubepods-burstable-pod1f1ea7c1_f840_4de4_92b2_24e7f8b7395e.slice - libcontainer container kubepods-burstable-pod1f1ea7c1_f840_4de4_92b2_24e7f8b7395e.slice. Oct 13 04:53:00.786162 kubelet[2693]: I1013 04:53:00.786044 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvr6h\" (UniqueName: \"kubernetes.io/projected/40f9740f-f16c-43cb-a27e-7b617b161858-kube-api-access-mvr6h\") pod \"coredns-668d6bf9bc-wmnpq\" (UID: \"40f9740f-f16c-43cb-a27e-7b617b161858\") " pod="kube-system/coredns-668d6bf9bc-wmnpq" Oct 13 04:53:00.786606 kubelet[2693]: I1013 04:53:00.786578 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdcq\" (UniqueName: \"kubernetes.io/projected/1f1ea7c1-f840-4de4-92b2-24e7f8b7395e-kube-api-access-vvdcq\") pod \"coredns-668d6bf9bc-hxlm6\" (UID: \"1f1ea7c1-f840-4de4-92b2-24e7f8b7395e\") " pod="kube-system/coredns-668d6bf9bc-hxlm6" Oct 13 04:53:00.786788 kubelet[2693]: I1013 04:53:00.786769 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1ea7c1-f840-4de4-92b2-24e7f8b7395e-config-volume\") pod \"coredns-668d6bf9bc-hxlm6\" (UID: \"1f1ea7c1-f840-4de4-92b2-24e7f8b7395e\") " pod="kube-system/coredns-668d6bf9bc-hxlm6" Oct 13 04:53:00.787002 kubelet[2693]: I1013 04:53:00.786971 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40f9740f-f16c-43cb-a27e-7b617b161858-config-volume\") pod \"coredns-668d6bf9bc-wmnpq\" (UID: \"40f9740f-f16c-43cb-a27e-7b617b161858\") " pod="kube-system/coredns-668d6bf9bc-wmnpq" Oct 13 04:53:01.010194 kubelet[2693]: E1013 04:53:01.010065 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:01.011736 containerd[1556]: time="2025-10-13T04:53:01.011665196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wmnpq,Uid:40f9740f-f16c-43cb-a27e-7b617b161858,Namespace:kube-system,Attempt:0,}" Oct 13 04:53:01.030307 kubelet[2693]: E1013 04:53:01.030215 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:01.032286 containerd[1556]: time="2025-10-13T04:53:01.032233836Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-hxlm6,Uid:1f1ea7c1-f840-4de4-92b2-24e7f8b7395e,Namespace:kube-system,Attempt:0,}" Oct 13 04:53:01.421714 kubelet[2693]: E1013 04:53:01.421682 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:02.422770 kubelet[2693]: E1013 04:53:02.422730 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:02.441142 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:44602.service - OpenSSH per-connection server daemon (10.0.0.1:44602). Oct 13 04:53:02.500236 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 44602 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:02.501756 sshd-session[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:02.506372 systemd-logind[1541]: New session 8 of user core. Oct 13 04:53:02.516674 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 04:53:02.547396 systemd-networkd[1469]: cilium_host: Link UP Oct 13 04:53:02.547546 systemd-networkd[1469]: cilium_net: Link UP Oct 13 04:53:02.547672 systemd-networkd[1469]: cilium_host: Gained carrier Oct 13 04:53:02.547813 systemd-networkd[1469]: cilium_net: Gained carrier Oct 13 04:53:02.628676 systemd-networkd[1469]: cilium_vxlan: Link UP Oct 13 04:53:02.628683 systemd-networkd[1469]: cilium_vxlan: Gained carrier Oct 13 04:53:02.652523 sshd[3484]: Connection closed by 10.0.0.1 port 44602 Oct 13 04:53:02.652881 sshd-session[3481]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:02.657541 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:44602.service: Deactivated successfully. Oct 13 04:53:02.662033 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 04:53:02.663335 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. Oct 13 04:53:02.666018 systemd-logind[1541]: Removed session 8. 
Oct 13 04:53:02.893543 kernel: NET: Registered PF_ALG protocol family Oct 13 04:53:02.924772 systemd-networkd[1469]: cilium_net: Gained IPv6LL Oct 13 04:53:02.925038 systemd-networkd[1469]: cilium_host: Gained IPv6LL Oct 13 04:53:03.424478 kubelet[2693]: E1013 04:53:03.424453 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:03.479028 systemd-networkd[1469]: lxc_health: Link UP Oct 13 04:53:03.479279 systemd-networkd[1469]: lxc_health: Gained carrier Oct 13 04:53:04.067527 kernel: eth0: renamed from tmp56501 Oct 13 04:53:04.068227 systemd-networkd[1469]: lxc63c403ec0704: Link UP Oct 13 04:53:04.068444 systemd-networkd[1469]: lxc63c403ec0704: Gained carrier Oct 13 04:53:04.084174 systemd-networkd[1469]: lxc0f5ecd921540: Link UP Oct 13 04:53:04.084674 kernel: eth0: renamed from tmpda753 Oct 13 04:53:04.085866 systemd-networkd[1469]: lxc0f5ecd921540: Gained carrier Oct 13 04:53:04.268760 systemd-networkd[1469]: cilium_vxlan: Gained IPv6LL Oct 13 04:53:04.359879 kubelet[2693]: I1013 04:53:04.359726 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-znrdk" podStartSLOduration=8.826410239 podStartE2EDuration="23.359708944s" podCreationTimestamp="2025-10-13 04:52:41 +0000 UTC" firstStartedPulling="2025-10-13 04:52:42.417776787 +0000 UTC m=+6.186878517" lastFinishedPulling="2025-10-13 04:52:56.951075492 +0000 UTC m=+20.720177222" observedRunningTime="2025-10-13 04:53:01.438438402 +0000 UTC m=+25.207540132" watchObservedRunningTime="2025-10-13 04:53:04.359708944 +0000 UTC m=+28.128810634" Oct 13 04:53:04.426330 kubelet[2693]: E1013 04:53:04.426289 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:04.908814 systemd-networkd[1469]: lxc_health: Gained IPv6LL Oct 13 04:53:05.164659 systemd-networkd[1469]: lxc0f5ecd921540: Gained IPv6LL Oct 13 04:53:05.428550 kubelet[2693]: E1013 04:53:05.428444 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:05.932755 systemd-networkd[1469]: lxc63c403ec0704: Gained IPv6LL Oct 13 04:53:06.431181 kubelet[2693]: E1013 04:53:06.430219 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:07.647218 containerd[1556]: time="2025-10-13T04:53:07.647109629Z" level=info msg="connecting to shim 56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4" address="unix:///run/containerd/s/52dcd80476af5e82cbd6a71ad8f26fa321eb38d116b94abb47c792d1be120ee2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:53:07.648392 containerd[1556]: time="2025-10-13T04:53:07.648196831Z" level=info msg="connecting to shim da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616" address="unix:///run/containerd/s/a19703f10f5ea6eaba40ab78273a5127199fc01c94b7c7c5dc7bf0280fea78e8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:53:07.683704 systemd[1]: Started cri-containerd-da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616.scope - libcontainer container da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616. 
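Between the PF_ALG registration and the coredns sandboxes coming up, systemd-networkd reports the cilium datapath devices (cilium_host, cilium_net, cilium_vxlan), the lxc_health device, and one lxcXXXX veth per pod gaining carrier and IPv6 link-local addresses. One quick way to see the same set of links from the node is simply to enumerate interfaces; a minimal sketch using only the Go standard library, not cilium or networkd tooling:

package main

import (
	"fmt"
	"net"
	"strings"
)

// List interfaces whose names match the cilium/lxc devices that
// systemd-networkd reports above, together with their addresses.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		addrs, _ := ifc.Addrs()
		fmt.Printf("%-20s flags=%v addrs=%v\n", ifc.Name, ifc.Flags, addrs)
	}
}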
Oct 13 04:53:07.685378 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:58362.service - OpenSSH per-connection server daemon (10.0.0.1:58362). Oct 13 04:53:07.689451 systemd[1]: Started cri-containerd-56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4.scope - libcontainer container 56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4. Oct 13 04:53:07.705490 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 04:53:07.707635 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 04:53:07.738811 containerd[1556]: time="2025-10-13T04:53:07.738703233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wmnpq,Uid:40f9740f-f16c-43cb-a27e-7b617b161858,Namespace:kube-system,Attempt:0,} returns sandbox id \"56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4\"" Oct 13 04:53:07.740530 containerd[1556]: time="2025-10-13T04:53:07.739668314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxlm6,Uid:1f1ea7c1-f840-4de4-92b2-24e7f8b7395e,Namespace:kube-system,Attempt:0,} returns sandbox id \"da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616\"" Oct 13 04:53:07.740858 kubelet[2693]: E1013 04:53:07.740835 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:07.741828 kubelet[2693]: E1013 04:53:07.741804 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:07.743358 containerd[1556]: time="2025-10-13T04:53:07.743215359Z" level=info msg="CreateContainer within sandbox \"56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 04:53:07.744014 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 58362 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:07.745035 containerd[1556]: time="2025-10-13T04:53:07.745008241Z" level=info msg="CreateContainer within sandbox \"da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 04:53:07.747782 sshd-session[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:07.754297 containerd[1556]: time="2025-10-13T04:53:07.753907973Z" level=info msg="Container b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:07.759633 systemd-logind[1541]: New session 9 of user core. Oct 13 04:53:07.764193 containerd[1556]: time="2025-10-13T04:53:07.764044387Z" level=info msg="Container 6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:07.768678 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 13 04:53:07.769747 containerd[1556]: time="2025-10-13T04:53:07.769677354Z" level=info msg="CreateContainer within sandbox \"56501266665bace069a834e49a616bc5db52ce7053acea3105f7d446a76582d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c\"" Oct 13 04:53:07.770527 containerd[1556]: time="2025-10-13T04:53:07.770424755Z" level=info msg="StartContainer for \"b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c\"" Oct 13 04:53:07.771890 containerd[1556]: time="2025-10-13T04:53:07.771679517Z" level=info msg="connecting to shim b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c" address="unix:///run/containerd/s/52dcd80476af5e82cbd6a71ad8f26fa321eb38d116b94abb47c792d1be120ee2" protocol=ttrpc version=3 Oct 13 04:53:07.771890 containerd[1556]: time="2025-10-13T04:53:07.771800317Z" level=info msg="CreateContainer within sandbox \"da7535efba66a0555a1bd267858eaae5f37c2f6380d8a0c5d3d66253ac756616\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071\"" Oct 13 04:53:07.772567 containerd[1556]: time="2025-10-13T04:53:07.772335118Z" level=info msg="StartContainer for \"6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071\"" Oct 13 04:53:07.773972 containerd[1556]: time="2025-10-13T04:53:07.773941960Z" level=info msg="connecting to shim 6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071" address="unix:///run/containerd/s/a19703f10f5ea6eaba40ab78273a5127199fc01c94b7c7c5dc7bf0280fea78e8" protocol=ttrpc version=3 Oct 13 04:53:07.803747 systemd[1]: Started cri-containerd-6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071.scope - libcontainer container 6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071. Oct 13 04:53:07.805085 systemd[1]: Started cri-containerd-b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c.scope - libcontainer container b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c. Oct 13 04:53:07.842665 containerd[1556]: time="2025-10-13T04:53:07.842574172Z" level=info msg="StartContainer for \"b5e41ba345b356719657da68908e6186247d96d2d88c0fec832ee28c05443a3c\" returns successfully" Oct 13 04:53:07.843724 containerd[1556]: time="2025-10-13T04:53:07.843606094Z" level=info msg="StartContainer for \"6963b0185220ac04412f8baf4c655385450075af5f8efde439706d686e5ec071\" returns successfully" Oct 13 04:53:07.918345 sshd[3984]: Connection closed by 10.0.0.1 port 58362 Oct 13 04:53:07.919370 sshd-session[3954]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:07.924697 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit. Oct 13 04:53:07.924868 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:58362.service: Deactivated successfully. Oct 13 04:53:07.926780 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 04:53:07.930566 systemd-logind[1541]: Removed session 9. 
Oct 13 04:53:08.435596 kubelet[2693]: E1013 04:53:08.435553 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:08.438393 kubelet[2693]: E1013 04:53:08.438360 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:08.466816 kubelet[2693]: I1013 04:53:08.465775 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wmnpq" podStartSLOduration=27.465756692 podStartE2EDuration="27.465756692s" podCreationTimestamp="2025-10-13 04:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:53:08.447465149 +0000 UTC m=+32.216566879" watchObservedRunningTime="2025-10-13 04:53:08.465756692 +0000 UTC m=+32.234858462" Oct 13 04:53:08.466816 kubelet[2693]: I1013 04:53:08.465898 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hxlm6" podStartSLOduration=27.465893292 podStartE2EDuration="27.465893292s" podCreationTimestamp="2025-10-13 04:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:53:08.463424649 +0000 UTC m=+32.232526379" watchObservedRunningTime="2025-10-13 04:53:08.465893292 +0000 UTC m=+32.234995022" Oct 13 04:53:09.440139 kubelet[2693]: E1013 04:53:09.440096 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:10.442032 kubelet[2693]: E1013 04:53:10.441920 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:12.939934 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:58372.service - OpenSSH per-connection server daemon (10.0.0.1:58372). Oct 13 04:53:13.007146 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 58372 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:13.008461 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:13.012039 systemd-logind[1541]: New session 10 of user core. Oct 13 04:53:13.021689 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 04:53:13.147965 sshd[4081]: Connection closed by 10.0.0.1 port 58372 Oct 13 04:53:13.148282 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:13.152044 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:58372.service: Deactivated successfully. Oct 13 04:53:13.153680 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 04:53:13.154705 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. Oct 13 04:53:13.155568 systemd-logind[1541]: Removed session 10. Oct 13 04:53:18.170904 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:43728.service - OpenSSH per-connection server daemon (10.0.0.1:43728). 
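The kubelet timestamps in the startup-latency entries above carry both a wall-clock value and an "m=+…" suffix (for example m=+32.234858462). That suffix is Go's monotonic clock reading, in seconds measured from roughly when the kubelet process started, and differences between two such timestamps are taken on the monotonic clock so they stay correct across NTP adjustments. A tiny sketch showing where the suffix comes from; this is plain Go standard-library behaviour, not kubelet code:

package main

import (
	"fmt"
	"time"
)

// time.Now() carries a monotonic reading, and Time.String() prints it as the
// "m=+…" suffix seen on the kubelet timestamps above.
func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond)
	later := time.Now()

	fmt.Println(later)            // wall clock plus an "m=+…" monotonic suffix
	fmt.Println(later.Sub(start)) // difference measured on the monotonic clock
}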
Oct 13 04:53:18.219057 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 43728 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:18.220309 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:18.224896 systemd-logind[1541]: New session 11 of user core. Oct 13 04:53:18.235693 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 04:53:18.360784 sshd[4101]: Connection closed by 10.0.0.1 port 43728 Oct 13 04:53:18.361134 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:18.381812 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:43728.service: Deactivated successfully. Oct 13 04:53:18.384499 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 04:53:18.385254 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit. Oct 13 04:53:18.387737 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:43736.service - OpenSSH per-connection server daemon (10.0.0.1:43736). Oct 13 04:53:18.388329 systemd-logind[1541]: Removed session 11. Oct 13 04:53:18.440968 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 43736 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:18.441779 kubelet[2693]: E1013 04:53:18.439974 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:18.446087 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:18.454757 systemd-logind[1541]: New session 12 of user core. Oct 13 04:53:18.459261 kubelet[2693]: E1013 04:53:18.459216 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:18.462724 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 04:53:18.624591 sshd[4121]: Connection closed by 10.0.0.1 port 43736 Oct 13 04:53:18.625197 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:18.634887 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:43736.service: Deactivated successfully. Oct 13 04:53:18.637965 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 04:53:18.646413 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit. Oct 13 04:53:18.649053 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:43738.service - OpenSSH per-connection server daemon (10.0.0.1:43738). Oct 13 04:53:18.650480 systemd-logind[1541]: Removed session 12. Oct 13 04:53:18.712241 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 43738 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:18.714031 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:18.718451 systemd-logind[1541]: New session 13 of user core. Oct 13 04:53:18.738747 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 04:53:18.859050 sshd[4137]: Connection closed by 10.0.0.1 port 43738 Oct 13 04:53:18.858539 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:18.862589 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:43738.service: Deactivated successfully. Oct 13 04:53:18.864245 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 04:53:18.866151 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit. 
Oct 13 04:53:18.867300 systemd-logind[1541]: Removed session 13. Oct 13 04:53:23.873990 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:43740.service - OpenSSH per-connection server daemon (10.0.0.1:43740). Oct 13 04:53:23.926192 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 43740 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:23.927487 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:23.931676 systemd-logind[1541]: New session 14 of user core. Oct 13 04:53:23.937655 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 04:53:24.054800 sshd[4153]: Connection closed by 10.0.0.1 port 43740 Oct 13 04:53:24.055271 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:24.063471 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:43740.service: Deactivated successfully. Oct 13 04:53:24.065110 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 04:53:24.065804 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit. Oct 13 04:53:24.068247 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:43742.service - OpenSSH per-connection server daemon (10.0.0.1:43742). Oct 13 04:53:24.069298 systemd-logind[1541]: Removed session 14. Oct 13 04:53:24.122200 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 43742 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:24.123569 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:24.127600 systemd-logind[1541]: New session 15 of user core. Oct 13 04:53:24.133680 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 04:53:24.309824 sshd[4169]: Connection closed by 10.0.0.1 port 43742 Oct 13 04:53:24.310700 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:24.319494 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:43742.service: Deactivated successfully. Oct 13 04:53:24.320991 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 04:53:24.322423 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit. Oct 13 04:53:24.323758 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:43746.service - OpenSSH per-connection server daemon (10.0.0.1:43746). Oct 13 04:53:24.324654 systemd-logind[1541]: Removed session 15. Oct 13 04:53:24.382759 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:24.384030 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:24.388212 systemd-logind[1541]: New session 16 of user core. Oct 13 04:53:24.394706 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 04:53:24.905006 sshd[4184]: Connection closed by 10.0.0.1 port 43746 Oct 13 04:53:24.905454 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:24.914056 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:43746.service: Deactivated successfully. Oct 13 04:53:24.915612 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 04:53:24.916465 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit. Oct 13 04:53:24.920976 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:43756.service - OpenSSH per-connection server daemon (10.0.0.1:43756). Oct 13 04:53:24.923095 systemd-logind[1541]: Removed session 16. 
Oct 13 04:53:24.973226 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:24.974600 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:24.978568 systemd-logind[1541]: New session 17 of user core. Oct 13 04:53:24.984643 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 04:53:25.214271 sshd[4206]: Connection closed by 10.0.0.1 port 43756 Oct 13 04:53:25.213243 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:25.224550 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:43756.service: Deactivated successfully. Oct 13 04:53:25.226221 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 04:53:25.226954 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit. Oct 13 04:53:25.229583 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764). Oct 13 04:53:25.231118 systemd-logind[1541]: Removed session 17. Oct 13 04:53:25.282545 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:25.283834 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:25.288401 systemd-logind[1541]: New session 18 of user core. Oct 13 04:53:25.303667 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 04:53:25.415765 sshd[4220]: Connection closed by 10.0.0.1 port 43764 Oct 13 04:53:25.417369 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:25.420774 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:43764.service: Deactivated successfully. Oct 13 04:53:25.422970 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 04:53:25.423626 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit. Oct 13 04:53:25.424838 systemd-logind[1541]: Removed session 18. Oct 13 04:53:30.431658 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874). Oct 13 04:53:30.501286 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:30.502353 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:30.506518 systemd-logind[1541]: New session 19 of user core. Oct 13 04:53:30.526757 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 04:53:30.634203 sshd[4239]: Connection closed by 10.0.0.1 port 33874 Oct 13 04:53:30.634576 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:30.639246 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:33874.service: Deactivated successfully. Oct 13 04:53:30.640981 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 04:53:30.643117 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit. Oct 13 04:53:30.644376 systemd-logind[1541]: Removed session 19. Oct 13 04:53:35.646985 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:48698.service - OpenSSH per-connection server daemon (10.0.0.1:48698). 
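The block of sshd and systemd-logind entries above follows a fixed per-connection pattern: "Accepted publickey for core from <ip> port <port> ssh2", a pam_unix session open, a logind "New session N of user core", then the matching close and "Removed session N", with each connection also getting its own sshd@…-<ip>:<port>.service unit. A small sketch that pulls the login details and session number out of lines shaped like the ones quoted here; the regular expressions are assumptions matched to these specific messages, not a general journald parser:

package main

import (
	"fmt"
	"regexp"
)

var (
	acceptedRe = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	sessionRe  = regexp.MustCompile(`New session (\d+) of user (\S+)`)
)

func main() {
	// Example lines copied from the journal excerpt above.
	lines := []string{
		"sshd[4203]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:...",
		"systemd-logind[1541]: New session 17 of user core.",
	}
	for _, l := range lines {
		if m := acceptedRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("login user=%s from=%s port=%s\n", m[1], m[2], m[3])
		}
		if m := sessionRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("logind session=%s user=%s\n", m[1], m[2])
		}
	}
}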
Oct 13 04:53:35.689425 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 48698 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:35.690666 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:35.694656 systemd-logind[1541]: New session 20 of user core. Oct 13 04:53:35.710776 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 04:53:35.817458 sshd[4256]: Connection closed by 10.0.0.1 port 48698 Oct 13 04:53:35.817808 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:35.821559 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:48698.service: Deactivated successfully. Oct 13 04:53:35.825206 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 04:53:35.825913 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit. Oct 13 04:53:35.827289 systemd-logind[1541]: Removed session 20. Oct 13 04:53:40.830643 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:48734.service - OpenSSH per-connection server daemon (10.0.0.1:48734). Oct 13 04:53:40.896742 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 48734 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:40.898027 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:40.901851 systemd-logind[1541]: New session 21 of user core. Oct 13 04:53:40.913731 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 04:53:41.023636 sshd[4274]: Connection closed by 10.0.0.1 port 48734 Oct 13 04:53:41.023444 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:41.030587 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:48734.service: Deactivated successfully. Oct 13 04:53:41.032921 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 04:53:41.035115 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit. Oct 13 04:53:41.037656 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:48740.service - OpenSSH per-connection server daemon (10.0.0.1:48740). Oct 13 04:53:41.038186 systemd-logind[1541]: Removed session 21. Oct 13 04:53:41.107591 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 48740 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:41.108624 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:41.112571 systemd-logind[1541]: New session 22 of user core. Oct 13 04:53:41.121649 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 04:53:43.748799 containerd[1556]: time="2025-10-13T04:53:43.748749650Z" level=info msg="StopContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" with timeout 30 (s)" Oct 13 04:53:43.749836 containerd[1556]: time="2025-10-13T04:53:43.749789210Z" level=info msg="Stop container \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" with signal terminated" Oct 13 04:53:43.782107 systemd[1]: cri-containerd-0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb.scope: Deactivated successfully. 
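The "StopContainer … with timeout 30 (s)" and "Stop container … with signal terminated" entries above describe the usual graceful-shutdown sequence: the runtime delivers SIGTERM (the stop signal here) and only escalates to SIGKILL if the container is still running when the grace period expires. A minimal sketch of the same pattern applied to an ordinary child process, assuming a Unix-like host; this is generic process handling, not CRI or containerd code:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGracePeriod terminates a child the way a runtime stops a container:
// SIGTERM first, SIGKILL if it has not exited within the grace period.
func stopWithGracePeriod(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "600")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithGracePeriod(cmd, 30*time.Second))
}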
Oct 13 04:53:43.783308 containerd[1556]: time="2025-10-13T04:53:43.783271273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" id:\"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" pid:3101 exited_at:{seconds:1760331223 nanos:782917032}" Oct 13 04:53:43.783392 containerd[1556]: time="2025-10-13T04:53:43.783317753Z" level=info msg="received exit event container_id:\"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" id:\"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" pid:3101 exited_at:{seconds:1760331223 nanos:782917032}" Oct 13 04:53:43.808637 containerd[1556]: time="2025-10-13T04:53:43.806959385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" id:\"16d11455db19c53543cc1ce704107093ebadb0183164f655686ee7909bed8405\" pid:4321 exited_at:{seconds:1760331223 nanos:805911263}" Oct 13 04:53:43.808288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb-rootfs.mount: Deactivated successfully. Oct 13 04:53:43.811360 containerd[1556]: time="2025-10-13T04:53:43.811313231Z" level=info msg="StopContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" with timeout 2 (s)" Oct 13 04:53:43.811794 containerd[1556]: time="2025-10-13T04:53:43.811765671Z" level=info msg="Stop container \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" with signal terminated" Oct 13 04:53:43.818852 containerd[1556]: time="2025-10-13T04:53:43.818804841Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 04:53:43.820787 systemd-networkd[1469]: lxc_health: Link DOWN Oct 13 04:53:43.820812 systemd-networkd[1469]: lxc_health: Lost carrier Oct 13 04:53:43.827462 containerd[1556]: time="2025-10-13T04:53:43.827325732Z" level=info msg="StopContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" returns successfully" Oct 13 04:53:43.829557 containerd[1556]: time="2025-10-13T04:53:43.829483015Z" level=info msg="StopPodSandbox for \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\"" Oct 13 04:53:43.836711 containerd[1556]: time="2025-10-13T04:53:43.836663865Z" level=info msg="Container to stop \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.839258 systemd[1]: cri-containerd-293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802.scope: Deactivated successfully. Oct 13 04:53:43.839595 systemd[1]: cri-containerd-293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802.scope: Consumed 6.193s CPU time, 123.3M memory peak, 160K read from disk, 12.9M written to disk. 
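The per-scope accounting systemd prints above ("Consumed 6.193s CPU time, 123.3M memory peak, 160K read from disk, 12.9M written to disk") comes from the unit's cgroup v2 counters. A hedged sketch that reads the underlying files for a scope: cpu.stat, memory.peak and io.stat are standard cgroup v2 interface files, but the scope path below is hypothetical and memory.peak needs a reasonably recent kernel:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Dump the raw counters behind systemd's "Consumed ... CPU time, ... memory peak"
// line from a unit's cgroup v2 directory. The scope name here is hypothetical.
func main() {
	cg := "/sys/fs/cgroup/system.slice/example.scope"

	for _, f := range []string{"cpu.stat", "memory.peak", "io.stat"} {
		data, err := os.ReadFile(filepath.Join(cg, f))
		if err != nil {
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", f, data)
	}
}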
Oct 13 04:53:43.841253 containerd[1556]: time="2025-10-13T04:53:43.841106471Z" level=info msg="received exit event container_id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" pid:3348 exited_at:{seconds:1760331223 nanos:840667550}" Oct 13 04:53:43.841326 containerd[1556]: time="2025-10-13T04:53:43.841274871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" id:\"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" pid:3348 exited_at:{seconds:1760331223 nanos:840667550}" Oct 13 04:53:43.846733 systemd[1]: cri-containerd-42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f.scope: Deactivated successfully. Oct 13 04:53:43.849347 containerd[1556]: time="2025-10-13T04:53:43.849083642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" id:\"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" pid:2808 exit_status:137 exited_at:{seconds:1760331223 nanos:848246521}" Oct 13 04:53:43.867009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802-rootfs.mount: Deactivated successfully. Oct 13 04:53:43.875945 containerd[1556]: time="2025-10-13T04:53:43.875902158Z" level=info msg="StopContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" returns successfully" Oct 13 04:53:43.876894 containerd[1556]: time="2025-10-13T04:53:43.876852279Z" level=info msg="StopPodSandbox for \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\"" Oct 13 04:53:43.876963 containerd[1556]: time="2025-10-13T04:53:43.876926719Z" level=info msg="Container to stop \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.876963 containerd[1556]: time="2025-10-13T04:53:43.876942639Z" level=info msg="Container to stop \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.876963 containerd[1556]: time="2025-10-13T04:53:43.876951079Z" level=info msg="Container to stop \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.876963 containerd[1556]: time="2025-10-13T04:53:43.876959519Z" level=info msg="Container to stop \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.877052 containerd[1556]: time="2025-10-13T04:53:43.876967319Z" level=info msg="Container to stop \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 04:53:43.885106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f-rootfs.mount: Deactivated successfully. Oct 13 04:53:43.885998 systemd[1]: cri-containerd-c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808.scope: Deactivated successfully. 
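The sandbox TaskExit events here report exit_status:137, which is the shell-style encoding of death by signal: 128 plus the signal number, so 137 corresponds to SIGKILL (9), the signal the paused sandbox containers typically receive when their pod sandboxes are torn down. A one-line sketch of that decoding:

package main

import (
	"fmt"
	"syscall"
)

// Decode a 128+N exit status such as the exit_status:137 reported for the
// stopped sandboxes above.
func main() {
	const status = 137
	if status > 128 {
		sig := syscall.Signal(status - 128)
		fmt.Printf("exit status %d => terminated by signal %d (%v)\n", status, int(sig), sig)
	}
}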
Oct 13 04:53:43.890027 containerd[1556]: time="2025-10-13T04:53:43.889988497Z" level=info msg="shim disconnected" id=42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f namespace=k8s.io Oct 13 04:53:43.906711 containerd[1556]: time="2025-10-13T04:53:43.890021537Z" level=warning msg="cleaning up after shim disconnected" id=42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f namespace=k8s.io Oct 13 04:53:43.906869 containerd[1556]: time="2025-10-13T04:53:43.906847360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 04:53:43.914727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808-rootfs.mount: Deactivated successfully. Oct 13 04:53:43.931269 containerd[1556]: time="2025-10-13T04:53:43.931222353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" id:\"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" pid:2884 exit_status:137 exited_at:{seconds:1760331223 nanos:887613934}" Oct 13 04:53:43.932046 containerd[1556]: time="2025-10-13T04:53:43.932020834Z" level=info msg="TearDown network for sandbox \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" successfully" Oct 13 04:53:43.932112 containerd[1556]: time="2025-10-13T04:53:43.932099354Z" level=info msg="StopPodSandbox for \"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" returns successfully" Oct 13 04:53:43.933198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f-shm.mount: Deactivated successfully. Oct 13 04:53:43.937001 containerd[1556]: time="2025-10-13T04:53:43.936938481Z" level=info msg="received exit event sandbox_id:\"42286a686f33c1df15ba1aac6d662f768bd12690d238898577d47dcbc224836f\" exit_status:137 exited_at:{seconds:1760331223 nanos:848246521}" Oct 13 04:53:43.943967 containerd[1556]: time="2025-10-13T04:53:43.943930010Z" level=info msg="received exit event sandbox_id:\"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" exit_status:137 exited_at:{seconds:1760331223 nanos:887613934}" Oct 13 04:53:43.944854 containerd[1556]: time="2025-10-13T04:53:43.944828771Z" level=info msg="TearDown network for sandbox \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" successfully" Oct 13 04:53:43.944954 containerd[1556]: time="2025-10-13T04:53:43.944940372Z" level=info msg="StopPodSandbox for \"c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808\" returns successfully" Oct 13 04:53:43.945157 containerd[1556]: time="2025-10-13T04:53:43.944836811Z" level=info msg="shim disconnected" id=c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808 namespace=k8s.io Oct 13 04:53:43.945157 containerd[1556]: time="2025-10-13T04:53:43.945115012Z" level=warning msg="cleaning up after shim disconnected" id=c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808 namespace=k8s.io Oct 13 04:53:43.945317 containerd[1556]: time="2025-10-13T04:53:43.945142732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 04:53:44.069753 kubelet[2693]: I1013 04:53:44.069621 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-etc-cni-netd\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 
04:53:44.069753 kubelet[2693]: I1013 04:53:44.069672 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-run\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.069753 kubelet[2693]: I1013 04:53:44.069690 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-net\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.069753 kubelet[2693]: I1013 04:53:44.069734 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-cgroup\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.069753 kubelet[2693]: I1013 04:53:44.069760 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-config-path\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069781 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggqh\" (UniqueName: \"kubernetes.io/projected/95b0640a-2e4a-48b5-95ed-71d264f48fdb-kube-api-access-hggqh\") pod \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\" (UID: \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069798 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95b0640a-2e4a-48b5-95ed-71d264f48fdb-cilium-config-path\") pod \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\" (UID: \"95b0640a-2e4a-48b5-95ed-71d264f48fdb\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069817 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-hostproc\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069872 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-bpf-maps\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069891 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cni-path\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.070889 kubelet[2693]: I1013 04:53:44.069914 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-kernel\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071015 kubelet[2693]: I1013 
04:53:44.069929 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-xtables-lock\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071015 kubelet[2693]: I1013 04:53:44.069942 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-lib-modules\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071015 kubelet[2693]: I1013 04:53:44.069964 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50ed0e2a-8871-43bd-a142-58e260e6704b-clustermesh-secrets\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071015 kubelet[2693]: I1013 04:53:44.069982 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p29mv\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071015 kubelet[2693]: I1013 04:53:44.070001 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-hubble-tls\") pod \"50ed0e2a-8871-43bd-a142-58e260e6704b\" (UID: \"50ed0e2a-8871-43bd-a142-58e260e6704b\") " Oct 13 04:53:44.071402 kubelet[2693]: I1013 04:53:44.071172 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.071402 kubelet[2693]: I1013 04:53:44.071206 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.071402 kubelet[2693]: I1013 04:53:44.071175 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-hostproc" (OuterVolumeSpecName: "hostproc") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.071402 kubelet[2693]: I1013 04:53:44.071172 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.071402 kubelet[2693]: I1013 04:53:44.071234 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.074664 kubelet[2693]: I1013 04:53:44.074609 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.074664 kubelet[2693]: I1013 04:53:44.074611 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.074760 kubelet[2693]: I1013 04:53:44.074686 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.074760 kubelet[2693]: I1013 04:53:44.074705 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cni-path" (OuterVolumeSpecName: "cni-path") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.074760 kubelet[2693]: I1013 04:53:44.074721 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 04:53:44.075892 kubelet[2693]: I1013 04:53:44.075827 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 04:53:44.076903 kubelet[2693]: I1013 04:53:44.076870 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b0640a-2e4a-48b5-95ed-71d264f48fdb-kube-api-access-hggqh" (OuterVolumeSpecName: "kube-api-access-hggqh") pod "95b0640a-2e4a-48b5-95ed-71d264f48fdb" (UID: "95b0640a-2e4a-48b5-95ed-71d264f48fdb"). InnerVolumeSpecName "kube-api-access-hggqh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 04:53:44.077192 kubelet[2693]: I1013 04:53:44.077162 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 04:53:44.077332 kubelet[2693]: I1013 04:53:44.077304 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv" (OuterVolumeSpecName: "kube-api-access-p29mv") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "kube-api-access-p29mv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 04:53:44.078008 kubelet[2693]: I1013 04:53:44.077970 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b0640a-2e4a-48b5-95ed-71d264f48fdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95b0640a-2e4a-48b5-95ed-71d264f48fdb" (UID: "95b0640a-2e4a-48b5-95ed-71d264f48fdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 04:53:44.078265 kubelet[2693]: I1013 04:53:44.078211 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ed0e2a-8871-43bd-a142-58e260e6704b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50ed0e2a-8871-43bd-a142-58e260e6704b" (UID: "50ed0e2a-8871-43bd-a142-58e260e6704b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 04:53:44.170778 kubelet[2693]: I1013 04:53:44.170741 2693 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.170962 kubelet[2693]: I1013 04:53:44.170950 2693 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171024 kubelet[2693]: I1013 04:53:44.171015 2693 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171078 kubelet[2693]: I1013 04:53:44.171065 2693 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171144 kubelet[2693]: I1013 04:53:44.171135 2693 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171198 kubelet[2693]: I1013 04:53:44.171190 2693 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171238 2693 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50ed0e2a-8871-43bd-a142-58e260e6704b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171250 2693 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p29mv\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-kube-api-access-p29mv\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171259 2693 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50ed0e2a-8871-43bd-a142-58e260e6704b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171266 2693 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171274 2693 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171283 2693 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171332 kubelet[2693]: I1013 04:53:44.171291 2693 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50ed0e2a-8871-43bd-a142-58e260e6704b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 13 
04:53:44.171332 kubelet[2693]: I1013 04:53:44.171299 2693 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hggqh\" (UniqueName: \"kubernetes.io/projected/95b0640a-2e4a-48b5-95ed-71d264f48fdb-kube-api-access-hggqh\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171493 kubelet[2693]: I1013 04:53:44.171307 2693 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95b0640a-2e4a-48b5-95ed-71d264f48fdb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.171493 kubelet[2693]: I1013 04:53:44.171317 2693 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50ed0e2a-8871-43bd-a142-58e260e6704b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 04:53:44.325872 systemd[1]: Removed slice kubepods-besteffort-pod95b0640a_2e4a_48b5_95ed_71d264f48fdb.slice - libcontainer container kubepods-besteffort-pod95b0640a_2e4a_48b5_95ed_71d264f48fdb.slice. Oct 13 04:53:44.327468 systemd[1]: Removed slice kubepods-burstable-pod50ed0e2a_8871_43bd_a142_58e260e6704b.slice - libcontainer container kubepods-burstable-pod50ed0e2a_8871_43bd_a142_58e260e6704b.slice. Oct 13 04:53:44.327584 systemd[1]: kubepods-burstable-pod50ed0e2a_8871_43bd_a142_58e260e6704b.slice: Consumed 6.274s CPU time, 123.6M memory peak, 164K read from disk, 12.9M written to disk. Oct 13 04:53:44.507009 kubelet[2693]: I1013 04:53:44.506727 2693 scope.go:117] "RemoveContainer" containerID="0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb" Oct 13 04:53:44.509467 containerd[1556]: time="2025-10-13T04:53:44.509316533Z" level=info msg="RemoveContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\"" Oct 13 04:53:44.513317 containerd[1556]: time="2025-10-13T04:53:44.513288537Z" level=info msg="RemoveContainer for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" returns successfully" Oct 13 04:53:44.515653 kubelet[2693]: I1013 04:53:44.515622 2693 scope.go:117] "RemoveContainer" containerID="0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb" Oct 13 04:53:44.516009 containerd[1556]: time="2025-10-13T04:53:44.515936166Z" level=error msg="ContainerStatus for \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\": not found" Oct 13 04:53:44.517770 kubelet[2693]: E1013 04:53:44.517717 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\": not found" containerID="0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb" Oct 13 04:53:44.522957 kubelet[2693]: I1013 04:53:44.522845 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb"} err="failed to get container status \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ca8803f22f066ad5bb87af8adf1727b8fbd820f0c272cc57f9e6f796b531dfb\": not found" Oct 13 04:53:44.522957 kubelet[2693]: I1013 04:53:44.522960 2693 scope.go:117] "RemoveContainer" 
containerID="293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802" Oct 13 04:53:44.526089 containerd[1556]: time="2025-10-13T04:53:44.525501031Z" level=info msg="RemoveContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\"" Oct 13 04:53:44.535997 containerd[1556]: time="2025-10-13T04:53:44.535946546Z" level=info msg="RemoveContainer for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" returns successfully" Oct 13 04:53:44.536294 kubelet[2693]: I1013 04:53:44.536270 2693 scope.go:117] "RemoveContainer" containerID="f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073" Oct 13 04:53:44.537642 containerd[1556]: time="2025-10-13T04:53:44.537621125Z" level=info msg="RemoveContainer for \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\"" Oct 13 04:53:44.541232 containerd[1556]: time="2025-10-13T04:53:44.541141244Z" level=info msg="RemoveContainer for \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" returns successfully" Oct 13 04:53:44.541418 kubelet[2693]: I1013 04:53:44.541402 2693 scope.go:117] "RemoveContainer" containerID="bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970" Oct 13 04:53:44.543709 containerd[1556]: time="2025-10-13T04:53:44.543682512Z" level=info msg="RemoveContainer for \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\"" Oct 13 04:53:44.547197 containerd[1556]: time="2025-10-13T04:53:44.547170030Z" level=info msg="RemoveContainer for \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" returns successfully" Oct 13 04:53:44.547380 kubelet[2693]: I1013 04:53:44.547351 2693 scope.go:117] "RemoveContainer" containerID="06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363" Oct 13 04:53:44.548882 containerd[1556]: time="2025-10-13T04:53:44.548839248Z" level=info msg="RemoveContainer for \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\"" Oct 13 04:53:44.551467 containerd[1556]: time="2025-10-13T04:53:44.551427597Z" level=info msg="RemoveContainer for \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" returns successfully" Oct 13 04:53:44.551619 kubelet[2693]: I1013 04:53:44.551595 2693 scope.go:117] "RemoveContainer" containerID="01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7" Oct 13 04:53:44.563395 containerd[1556]: time="2025-10-13T04:53:44.563355088Z" level=info msg="RemoveContainer for \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\"" Oct 13 04:53:44.566199 containerd[1556]: time="2025-10-13T04:53:44.566160719Z" level=info msg="RemoveContainer for \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" returns successfully" Oct 13 04:53:44.566520 kubelet[2693]: I1013 04:53:44.566403 2693 scope.go:117] "RemoveContainer" containerID="293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802" Oct 13 04:53:44.566797 containerd[1556]: time="2025-10-13T04:53:44.566763406Z" level=error msg="ContainerStatus for \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\": not found" Oct 13 04:53:44.566945 kubelet[2693]: E1013 04:53:44.566911 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\": not 
found" containerID="293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802" Oct 13 04:53:44.567027 kubelet[2693]: I1013 04:53:44.567002 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802"} err="failed to get container status \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\": rpc error: code = NotFound desc = an error occurred when try to find container \"293da93d61f56c643cfc35f4c36f5521ac67c368e49625a80c7f2c9b7a869802\": not found" Oct 13 04:53:44.567127 kubelet[2693]: I1013 04:53:44.567085 2693 scope.go:117] "RemoveContainer" containerID="f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073" Oct 13 04:53:44.567332 containerd[1556]: time="2025-10-13T04:53:44.567299611Z" level=error msg="ContainerStatus for \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\": not found" Oct 13 04:53:44.567481 kubelet[2693]: E1013 04:53:44.567459 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\": not found" containerID="f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073" Oct 13 04:53:44.567558 kubelet[2693]: I1013 04:53:44.567487 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073"} err="failed to get container status \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a963324f58c15788727ee85b77f5fdbea4a71e34180f1c389cb49c41ca4073\": not found" Oct 13 04:53:44.567558 kubelet[2693]: I1013 04:53:44.567516 2693 scope.go:117] "RemoveContainer" containerID="bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970" Oct 13 04:53:44.567754 containerd[1556]: time="2025-10-13T04:53:44.567722576Z" level=error msg="ContainerStatus for \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\": not found" Oct 13 04:53:44.567891 kubelet[2693]: E1013 04:53:44.567851 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\": not found" containerID="bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970" Oct 13 04:53:44.568029 kubelet[2693]: I1013 04:53:44.567876 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970"} err="failed to get container status \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb2f5b1d447c0f178d6d7d58895975839c407cc23de171a47894665781199970\": not found" Oct 13 04:53:44.568029 kubelet[2693]: I1013 04:53:44.567965 2693 scope.go:117] "RemoveContainer" 
containerID="06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363" Oct 13 04:53:44.568153 containerd[1556]: time="2025-10-13T04:53:44.568098980Z" level=error msg="ContainerStatus for \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\": not found" Oct 13 04:53:44.568268 kubelet[2693]: E1013 04:53:44.568245 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\": not found" containerID="06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363" Oct 13 04:53:44.568424 kubelet[2693]: I1013 04:53:44.568301 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363"} err="failed to get container status \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\": rpc error: code = NotFound desc = an error occurred when try to find container \"06c19fa73a4b708e602af09e37810c2389e636e70a95a980e312635dde7bf363\": not found" Oct 13 04:53:44.568424 kubelet[2693]: I1013 04:53:44.568320 2693 scope.go:117] "RemoveContainer" containerID="01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7" Oct 13 04:53:44.569603 containerd[1556]: time="2025-10-13T04:53:44.568441704Z" level=error msg="ContainerStatus for \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\": not found" Oct 13 04:53:44.569738 kubelet[2693]: E1013 04:53:44.569716 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\": not found" containerID="01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7" Oct 13 04:53:44.569783 kubelet[2693]: I1013 04:53:44.569747 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7"} err="failed to get container status \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"01d0907f5d1a87e344ae8aea367b0c521ba3e713e5fa26abe568d5c0cac3c1b7\": not found" Oct 13 04:53:44.808127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c387a8694f4e62a1dda56cfe40a6174fd69efd0fd7c1cb4b07528b29ff886808-shm.mount: Deactivated successfully. Oct 13 04:53:44.808223 systemd[1]: var-lib-kubelet-pods-50ed0e2a\x2d8871\x2d43bd\x2da142\x2d58e260e6704b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp29mv.mount: Deactivated successfully. Oct 13 04:53:44.808275 systemd[1]: var-lib-kubelet-pods-95b0640a\x2d2e4a\x2d48b5\x2d95ed\x2d71d264f48fdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhggqh.mount: Deactivated successfully. Oct 13 04:53:44.808327 systemd[1]: var-lib-kubelet-pods-50ed0e2a\x2d8871\x2d43bd\x2da142\x2d58e260e6704b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 13 04:53:44.808374 systemd[1]: var-lib-kubelet-pods-50ed0e2a\x2d8871\x2d43bd\x2da142\x2d58e260e6704b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 13 04:53:45.704621 sshd[4291]: Connection closed by 10.0.0.1 port 48740 Oct 13 04:53:45.705231 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:45.723145 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:48740.service: Deactivated successfully. Oct 13 04:53:45.725128 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 04:53:45.725423 systemd[1]: session-22.scope: Consumed 1.965s CPU time, 28.8M memory peak. Oct 13 04:53:45.726017 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit. Oct 13 04:53:45.729049 systemd[1]: Started sshd@22-10.0.0.32:22-10.0.0.1:46080.service - OpenSSH per-connection server daemon (10.0.0.1:46080). Oct 13 04:53:45.729568 systemd-logind[1541]: Removed session 22. Oct 13 04:53:45.784140 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 46080 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:45.785588 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:45.790054 systemd-logind[1541]: New session 23 of user core. Oct 13 04:53:45.796685 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 04:53:46.321738 kubelet[2693]: I1013 04:53:46.320928 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ed0e2a-8871-43bd-a142-58e260e6704b" path="/var/lib/kubelet/pods/50ed0e2a-8871-43bd-a142-58e260e6704b/volumes" Oct 13 04:53:46.321738 kubelet[2693]: I1013 04:53:46.321449 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b0640a-2e4a-48b5-95ed-71d264f48fdb" path="/var/lib/kubelet/pods/95b0640a-2e4a-48b5-95ed-71d264f48fdb/volumes" Oct 13 04:53:46.377245 kubelet[2693]: E1013 04:53:46.377161 2693 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 13 04:53:46.687221 sshd[4448]: Connection closed by 10.0.0.1 port 46080 Oct 13 04:53:46.687913 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:46.702959 systemd[1]: sshd@22-10.0.0.32:22-10.0.0.1:46080.service: Deactivated successfully. Oct 13 04:53:46.710030 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 04:53:46.711559 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit. Oct 13 04:53:46.713682 kubelet[2693]: I1013 04:53:46.713608 2693 memory_manager.go:355] "RemoveStaleState removing state" podUID="95b0640a-2e4a-48b5-95ed-71d264f48fdb" containerName="cilium-operator" Oct 13 04:53:46.713682 kubelet[2693]: I1013 04:53:46.713659 2693 memory_manager.go:355] "RemoveStaleState removing state" podUID="50ed0e2a-8871-43bd-a142-58e260e6704b" containerName="cilium-agent" Oct 13 04:53:46.715290 systemd-logind[1541]: Removed session 23. Oct 13 04:53:46.719300 systemd[1]: Started sshd@23-10.0.0.32:22-10.0.0.1:46084.service - OpenSSH per-connection server daemon (10.0.0.1:46084). Oct 13 04:53:46.736183 systemd[1]: Created slice kubepods-burstable-pod0df55bbf_77a1_4572_8a5d_eccb5853b1e5.slice - libcontainer container kubepods-burstable-pod0df55bbf_77a1_4572_8a5d_eccb5853b1e5.slice. 
Oct 13 04:53:46.781410 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 46084 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:46.782807 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:46.787039 systemd-logind[1541]: New session 24 of user core. Oct 13 04:53:46.789822 kubelet[2693]: I1013 04:53:46.789756 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-bpf-maps\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789798 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-cni-path\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789928 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-host-proc-sys-kernel\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789950 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-etc-cni-netd\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789964 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-lib-modules\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789982 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-clustermesh-secrets\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790197 kubelet[2693]: I1013 04:53:46.789996 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-cilium-ipsec-secrets\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790011 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-cilium-run\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790026 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-hostproc\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790042 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-cilium-config-path\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790057 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-cilium-cgroup\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790072 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9sx\" (UniqueName: \"kubernetes.io/projected/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-kube-api-access-dw9sx\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790352 kubelet[2693]: I1013 04:53:46.790089 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-xtables-lock\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790463 kubelet[2693]: I1013 04:53:46.790103 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-hubble-tls\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.790463 kubelet[2693]: I1013 04:53:46.790119 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0df55bbf-77a1-4572-8a5d-eccb5853b1e5-host-proc-sys-net\") pod \"cilium-6zbc7\" (UID: \"0df55bbf-77a1-4572-8a5d-eccb5853b1e5\") " pod="kube-system/cilium-6zbc7" Oct 13 04:53:46.796656 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 04:53:46.845346 sshd[4464]: Connection closed by 10.0.0.1 port 46084 Oct 13 04:53:46.845192 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:46.854924 systemd[1]: sshd@23-10.0.0.32:22-10.0.0.1:46084.service: Deactivated successfully. Oct 13 04:53:46.856688 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 04:53:46.857530 systemd-logind[1541]: Session 24 logged out. Waiting for processes to exit. Oct 13 04:53:46.860320 systemd[1]: Started sshd@24-10.0.0.32:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). Oct 13 04:53:46.861048 systemd-logind[1541]: Removed session 24. Oct 13 04:53:46.917156 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:53:46.918731 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:53:46.923135 systemd-logind[1541]: New session 25 of user core. 
Oct 13 04:53:46.934750 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 04:53:47.039097 kubelet[2693]: E1013 04:53:47.039051 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:47.040083 containerd[1556]: time="2025-10-13T04:53:47.040042645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zbc7,Uid:0df55bbf-77a1-4572-8a5d-eccb5853b1e5,Namespace:kube-system,Attempt:0,}" Oct 13 04:53:47.063356 containerd[1556]: time="2025-10-13T04:53:47.063311240Z" level=info msg="connecting to shim b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:53:47.086697 systemd[1]: Started cri-containerd-b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19.scope - libcontainer container b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19. Oct 13 04:53:47.123877 containerd[1556]: time="2025-10-13T04:53:47.123831612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zbc7,Uid:0df55bbf-77a1-4572-8a5d-eccb5853b1e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\"" Oct 13 04:53:47.124848 kubelet[2693]: E1013 04:53:47.124826 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:47.127354 containerd[1556]: time="2025-10-13T04:53:47.127308608Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 04:53:47.134717 containerd[1556]: time="2025-10-13T04:53:47.134576721Z" level=info msg="Container 62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:47.140732 containerd[1556]: time="2025-10-13T04:53:47.140610702Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\"" Oct 13 04:53:47.141282 containerd[1556]: time="2025-10-13T04:53:47.141255309Z" level=info msg="StartContainer for \"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\"" Oct 13 04:53:47.142104 containerd[1556]: time="2025-10-13T04:53:47.142079677Z" level=info msg="connecting to shim 62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" protocol=ttrpc version=3 Oct 13 04:53:47.170721 systemd[1]: Started cri-containerd-62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9.scope - libcontainer container 62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9. Oct 13 04:53:47.196246 containerd[1556]: time="2025-10-13T04:53:47.196208025Z" level=info msg="StartContainer for \"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\" returns successfully" Oct 13 04:53:47.204445 systemd[1]: cri-containerd-62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9.scope: Deactivated successfully. 
Oct 13 04:53:47.207846 containerd[1556]: time="2025-10-13T04:53:47.207809902Z" level=info msg="received exit event container_id:\"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\" id:\"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\" pid:4543 exited_at:{seconds:1760331227 nanos:207549659}" Oct 13 04:53:47.208041 containerd[1556]: time="2025-10-13T04:53:47.207911023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\" id:\"62c00b6ab2479c028e913120222d7ad5e0fb91aeca3f5e5f006b711db75a27c9\" pid:4543 exited_at:{seconds:1760331227 nanos:207549659}" Oct 13 04:53:47.525139 kubelet[2693]: E1013 04:53:47.524787 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:47.527141 containerd[1556]: time="2025-10-13T04:53:47.527102852Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 04:53:47.533587 containerd[1556]: time="2025-10-13T04:53:47.533542157Z" level=info msg="Container f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:47.539389 containerd[1556]: time="2025-10-13T04:53:47.539300936Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\"" Oct 13 04:53:47.540922 containerd[1556]: time="2025-10-13T04:53:47.539881541Z" level=info msg="StartContainer for \"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\"" Oct 13 04:53:47.541367 containerd[1556]: time="2025-10-13T04:53:47.541334196Z" level=info msg="connecting to shim f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" protocol=ttrpc version=3 Oct 13 04:53:47.569735 systemd[1]: Started cri-containerd-f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c.scope - libcontainer container f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c. Oct 13 04:53:47.597800 containerd[1556]: time="2025-10-13T04:53:47.597758407Z" level=info msg="StartContainer for \"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\" returns successfully" Oct 13 04:53:47.603367 systemd[1]: cri-containerd-f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c.scope: Deactivated successfully. 
Oct 13 04:53:47.604651 containerd[1556]: time="2025-10-13T04:53:47.603919309Z" level=info msg="received exit event container_id:\"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\" id:\"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\" pid:4591 exited_at:{seconds:1760331227 nanos:603737427}" Oct 13 04:53:47.605116 containerd[1556]: time="2025-10-13T04:53:47.605090481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\" id:\"f3d337beb3239f1167adf18bafedec7f9d4795a55bc70752aec531bfaf0d701c\" pid:4591 exited_at:{seconds:1760331227 nanos:603737427}" Oct 13 04:53:48.502967 kubelet[2693]: I1013 04:53:48.502905 2693 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T04:53:48Z","lastTransitionTime":"2025-10-13T04:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 13 04:53:48.531116 kubelet[2693]: E1013 04:53:48.530897 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:48.541226 containerd[1556]: time="2025-10-13T04:53:48.535707308Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 04:53:48.558818 containerd[1556]: time="2025-10-13T04:53:48.558774135Z" level=info msg="Container 58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:48.567879 containerd[1556]: time="2025-10-13T04:53:48.567823944Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\"" Oct 13 04:53:48.569715 containerd[1556]: time="2025-10-13T04:53:48.568739633Z" level=info msg="StartContainer for \"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\"" Oct 13 04:53:48.570360 containerd[1556]: time="2025-10-13T04:53:48.570281448Z" level=info msg="connecting to shim 58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" protocol=ttrpc version=3 Oct 13 04:53:48.597705 systemd[1]: Started cri-containerd-58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2.scope - libcontainer container 58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2. Oct 13 04:53:48.632882 systemd[1]: cri-containerd-58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2.scope: Deactivated successfully. 
Oct 13 04:53:48.633952 containerd[1556]: time="2025-10-13T04:53:48.633904634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\" id:\"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\" pid:4637 exited_at:{seconds:1760331228 nanos:633497350}" Oct 13 04:53:48.637005 containerd[1556]: time="2025-10-13T04:53:48.636951024Z" level=info msg="received exit event container_id:\"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\" id:\"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\" pid:4637 exited_at:{seconds:1760331228 nanos:633497350}" Oct 13 04:53:48.646907 containerd[1556]: time="2025-10-13T04:53:48.646754081Z" level=info msg="StartContainer for \"58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2\" returns successfully" Oct 13 04:53:48.666540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58dd1cd5cf88c993ebaed501c67997e7fbb4ba02f27c59592967009ec028dca2-rootfs.mount: Deactivated successfully. Oct 13 04:53:49.537577 kubelet[2693]: E1013 04:53:49.536385 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:49.540989 containerd[1556]: time="2025-10-13T04:53:49.540922293Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 04:53:49.557083 containerd[1556]: time="2025-10-13T04:53:49.556277120Z" level=info msg="Container da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:49.564608 containerd[1556]: time="2025-10-13T04:53:49.564449278Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\"" Oct 13 04:53:49.566717 containerd[1556]: time="2025-10-13T04:53:49.565321367Z" level=info msg="StartContainer for \"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\"" Oct 13 04:53:49.566717 containerd[1556]: time="2025-10-13T04:53:49.566217295Z" level=info msg="connecting to shim da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" protocol=ttrpc version=3 Oct 13 04:53:49.586733 systemd[1]: Started cri-containerd-da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f.scope - libcontainer container da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f. Oct 13 04:53:49.616049 systemd[1]: cri-containerd-da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f.scope: Deactivated successfully. 
Oct 13 04:53:49.616346 containerd[1556]: time="2025-10-13T04:53:49.616181934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\" id:\"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\" pid:4676 exited_at:{seconds:1760331229 nanos:615896931}" Oct 13 04:53:49.616346 containerd[1556]: time="2025-10-13T04:53:49.616300775Z" level=info msg="received exit event container_id:\"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\" id:\"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\" pid:4676 exited_at:{seconds:1760331229 nanos:615896931}" Oct 13 04:53:49.624029 containerd[1556]: time="2025-10-13T04:53:49.623991768Z" level=info msg="StartContainer for \"da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f\" returns successfully" Oct 13 04:53:49.636346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da09396ec2d819f229488be010069f92dc7ae5ed37dca764ffab4c6dc592ee9f-rootfs.mount: Deactivated successfully. Oct 13 04:53:50.542392 kubelet[2693]: E1013 04:53:50.542349 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:50.544465 containerd[1556]: time="2025-10-13T04:53:50.544430836Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 04:53:50.569544 containerd[1556]: time="2025-10-13T04:53:50.566446321Z" level=info msg="Container c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:53:50.572242 containerd[1556]: time="2025-10-13T04:53:50.572183855Z" level=info msg="CreateContainer within sandbox \"b12a07b4363c0c64b3cc8e46266c5cb4369f4f4138db9ad3156f48449aef9d19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\"" Oct 13 04:53:50.572843 containerd[1556]: time="2025-10-13T04:53:50.572819781Z" level=info msg="StartContainer for \"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\"" Oct 13 04:53:50.574613 containerd[1556]: time="2025-10-13T04:53:50.574548997Z" level=info msg="connecting to shim c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4" address="unix:///run/containerd/s/12403c777dbad952c5a8ea2cb16e075d700fc1a1f3e3e5c1b6f32d2760613644" protocol=ttrpc version=3 Oct 13 04:53:50.599737 systemd[1]: Started cri-containerd-c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4.scope - libcontainer container c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4. 
Oct 13 04:53:50.639740 containerd[1556]: time="2025-10-13T04:53:50.639701483Z" level=info msg="StartContainer for \"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" returns successfully" Oct 13 04:53:50.712679 containerd[1556]: time="2025-10-13T04:53:50.712630122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" id:\"764b1eaf8b996e199dc5f37a347aa5d80ef801df20bd83bc5ac5b805a1db9fca\" pid:4743 exited_at:{seconds:1760331230 nanos:712296559}" Oct 13 04:53:50.908617 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Oct 13 04:53:51.548826 kubelet[2693]: E1013 04:53:51.548797 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:51.566032 kubelet[2693]: I1013 04:53:51.565960 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6zbc7" podStartSLOduration=5.565944162 podStartE2EDuration="5.565944162s" podCreationTimestamp="2025-10-13 04:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:53:51.565359317 +0000 UTC m=+75.334461047" watchObservedRunningTime="2025-10-13 04:53:51.565944162 +0000 UTC m=+75.335045892" Oct 13 04:53:53.040549 kubelet[2693]: E1013 04:53:53.040495 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:53.488835 containerd[1556]: time="2025-10-13T04:53:53.488723858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" id:\"56decbea3231806e4faac84473caa554b08a290040715c75b66fde7d8145267c\" pid:5160 exit_status:1 exited_at:{seconds:1760331233 nanos:488222173}" Oct 13 04:53:53.728366 systemd-networkd[1469]: lxc_health: Link UP Oct 13 04:53:53.728821 systemd-networkd[1469]: lxc_health: Gained carrier Oct 13 04:53:54.893471 systemd-networkd[1469]: lxc_health: Gained IPv6LL Oct 13 04:53:55.041784 kubelet[2693]: E1013 04:53:55.041736 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:55.562117 kubelet[2693]: E1013 04:53:55.562008 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:55.629222 containerd[1556]: time="2025-10-13T04:53:55.627815965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" id:\"9aa7c250b5d97305ddb0adcda2474407c447a93995cdf36ecbc187564248f0f0\" pid:5285 exited_at:{seconds:1760331235 nanos:627160200}" Oct 13 04:53:56.563629 kubelet[2693]: E1013 04:53:56.563574 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:53:57.318613 kubelet[2693]: E1013 04:53:57.318460 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 
04:53:57.755629 containerd[1556]: time="2025-10-13T04:53:57.755493670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" id:\"4f9bc21c790c9539f572b171dba254820af653ba1578813927446fc4998f1929\" pid:5319 exited_at:{seconds:1760331237 nanos:755188508}" Oct 13 04:53:59.893620 containerd[1556]: time="2025-10-13T04:53:59.893527411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dbc0d73e1c1a2d523d14655a02a54036d609e7c8e4c72954dac2c207f0dfa4\" id:\"97a9ec6015d0373fd26a24f9881672bb6969102b5d765bc4c69bf991057170b5\" pid:5344 exited_at:{seconds:1760331239 nanos:893226969}" Oct 13 04:53:59.907052 sshd[4479]: Connection closed by 10.0.0.1 port 46086 Oct 13 04:53:59.907831 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Oct 13 04:53:59.912463 systemd[1]: sshd@24-10.0.0.32:22-10.0.0.1:46086.service: Deactivated successfully. Oct 13 04:53:59.914352 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 04:53:59.915282 systemd-logind[1541]: Session 25 logged out. Waiting for processes to exit. Oct 13 04:53:59.916683 systemd-logind[1541]: Removed session 25. Oct 13 04:54:00.321881 kubelet[2693]: E1013 04:54:00.321169 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"