Oct 13 05:10:13.342523 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 13 05:10:13.342547 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Oct 13 03:30:16 -00 2025 Oct 13 05:10:13.342556 kernel: KASLR enabled Oct 13 05:10:13.342562 kernel: efi: EFI v2.7 by EDK II Oct 13 05:10:13.342568 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 13 05:10:13.342574 kernel: random: crng init done Oct 13 05:10:13.342582 kernel: secureboot: Secure boot disabled Oct 13 05:10:13.342588 kernel: ACPI: Early table checksum verification disabled Oct 13 05:10:13.342596 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 13 05:10:13.342602 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 13 05:10:13.342609 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342615 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342621 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342627 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342637 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342644 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342650 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342656 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342663 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:10:13.342669 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 13 05:10:13.342675 kernel: ACPI: Use ACPI SPCR as default console: No Oct 13 05:10:13.342682 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 05:10:13.342689 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 13 05:10:13.342696 kernel: Zone ranges: Oct 13 05:10:13.342711 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 05:10:13.342717 kernel: DMA32 empty Oct 13 05:10:13.342724 kernel: Normal empty Oct 13 05:10:13.342730 kernel: Device empty Oct 13 05:10:13.342736 kernel: Movable zone start for each node Oct 13 05:10:13.342742 kernel: Early memory node ranges Oct 13 05:10:13.342749 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 13 05:10:13.342755 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 13 05:10:13.342761 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 13 05:10:13.342767 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 13 05:10:13.342775 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 13 05:10:13.342781 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 13 05:10:13.342788 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 13 05:10:13.342794 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 13 05:10:13.342801 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 13 05:10:13.342807 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 13 05:10:13.342818 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
13 05:10:13.342825 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 13 05:10:13.342832 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 13 05:10:13.342839 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 13 05:10:13.342846 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 13 05:10:13.342853 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 13 05:10:13.342860 kernel: psci: probing for conduit method from ACPI. Oct 13 05:10:13.342867 kernel: psci: PSCIv1.1 detected in firmware. Oct 13 05:10:13.342875 kernel: psci: Using standard PSCI v0.2 function IDs Oct 13 05:10:13.342882 kernel: psci: Trusted OS migration not required Oct 13 05:10:13.342888 kernel: psci: SMC Calling Convention v1.1 Oct 13 05:10:13.342902 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 13 05:10:13.342909 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 13 05:10:13.342916 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 13 05:10:13.342924 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 13 05:10:13.342931 kernel: Detected PIPT I-cache on CPU0 Oct 13 05:10:13.342938 kernel: CPU features: detected: GIC system register CPU interface Oct 13 05:10:13.342945 kernel: CPU features: detected: Spectre-v4 Oct 13 05:10:13.342952 kernel: CPU features: detected: Spectre-BHB Oct 13 05:10:13.342960 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 13 05:10:13.342985 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 13 05:10:13.342993 kernel: CPU features: detected: ARM erratum 1418040 Oct 13 05:10:13.343000 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 13 05:10:13.343007 kernel: alternatives: applying boot alternatives Oct 13 05:10:13.343015 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99 Oct 13 05:10:13.343022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:10:13.343029 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 05:10:13.343036 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:10:13.343043 kernel: Fallback order for Node 0: 0 Oct 13 05:10:13.343052 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 13 05:10:13.343059 kernel: Policy zone: DMA Oct 13 05:10:13.343066 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:10:13.343073 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 13 05:10:13.343080 kernel: software IO TLB: area num 4. Oct 13 05:10:13.343087 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 13 05:10:13.343094 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 13 05:10:13.343101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 05:10:13.343108 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:10:13.343115 kernel: rcu: RCU event tracing is enabled. Oct 13 05:10:13.343122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Oct 13 05:10:13.343130 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:10:13.343138 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:10:13.343145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:10:13.343152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 05:10:13.343158 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:10:13.343165 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:10:13.343172 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 13 05:10:13.343179 kernel: GICv3: 256 SPIs implemented Oct 13 05:10:13.343185 kernel: GICv3: 0 Extended SPIs implemented Oct 13 05:10:13.343192 kernel: Root IRQ handler: gic_handle_irq Oct 13 05:10:13.343199 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 13 05:10:13.343206 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 13 05:10:13.343213 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 13 05:10:13.343220 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 13 05:10:13.343227 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 13 05:10:13.343234 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 13 05:10:13.343241 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 13 05:10:13.343247 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 13 05:10:13.343254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:10:13.343261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 05:10:13.343268 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 13 05:10:13.343275 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 13 05:10:13.343283 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 13 05:10:13.343290 kernel: arm-pv: using stolen time PV Oct 13 05:10:13.343297 kernel: Console: colour dummy device 80x25 Oct 13 05:10:13.343305 kernel: ACPI: Core revision 20240827 Oct 13 05:10:13.343313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 13 05:10:13.343320 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:10:13.343328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:10:13.343335 kernel: landlock: Up and running. Oct 13 05:10:13.343344 kernel: SELinux: Initializing. Oct 13 05:10:13.343351 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:10:13.343359 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:10:13.343366 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:10:13.343374 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:10:13.343382 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:10:13.343389 kernel: Remapping and enabling EFI services. Oct 13 05:10:13.343398 kernel: smp: Bringing up secondary CPUs ... 
Oct 13 05:10:13.343409 kernel: Detected PIPT I-cache on CPU1 Oct 13 05:10:13.343417 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 13 05:10:13.343426 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 13 05:10:13.343433 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 05:10:13.343441 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 13 05:10:13.343448 kernel: Detected PIPT I-cache on CPU2 Oct 13 05:10:13.343456 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 13 05:10:13.343465 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 13 05:10:13.343473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 05:10:13.343480 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 13 05:10:13.343488 kernel: Detected PIPT I-cache on CPU3 Oct 13 05:10:13.343495 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 13 05:10:13.343503 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 13 05:10:13.343512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 13 05:10:13.343519 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 13 05:10:13.343526 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 05:10:13.343534 kernel: SMP: Total of 4 processors activated. Oct 13 05:10:13.343541 kernel: CPU: All CPU(s) started at EL1 Oct 13 05:10:13.343549 kernel: CPU features: detected: 32-bit EL0 Support Oct 13 05:10:13.343556 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 13 05:10:13.343564 kernel: CPU features: detected: Common not Private translations Oct 13 05:10:13.343572 kernel: CPU features: detected: CRC32 instructions Oct 13 05:10:13.343579 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 13 05:10:13.343587 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 13 05:10:13.343594 kernel: CPU features: detected: LSE atomic instructions Oct 13 05:10:13.343602 kernel: CPU features: detected: Privileged Access Never Oct 13 05:10:13.343609 kernel: CPU features: detected: RAS Extension Support Oct 13 05:10:13.343616 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 13 05:10:13.343625 kernel: alternatives: applying system-wide alternatives Oct 13 05:10:13.343633 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 13 05:10:13.343640 kernel: Memory: 2450400K/2572288K available (11200K kernel code, 2456K rwdata, 9080K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Oct 13 05:10:13.343648 kernel: devtmpfs: initialized Oct 13 05:10:13.343656 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:10:13.343663 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 05:10:13.343671 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 13 05:10:13.343679 kernel: 0 pages in range for non-PLT usage Oct 13 05:10:13.343687 kernel: 515040 pages in range for PLT usage Oct 13 05:10:13.343694 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:10:13.343706 kernel: SMBIOS 3.0.0 present. 
Oct 13 05:10:13.343714 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 13 05:10:13.343721 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:10:13.343729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:10:13.343738 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 13 05:10:13.343745 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 13 05:10:13.343753 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 13 05:10:13.343760 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:10:13.343768 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Oct 13 05:10:13.343775 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:10:13.343782 kernel: cpuidle: using governor menu Oct 13 05:10:13.343790 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 13 05:10:13.343799 kernel: ASID allocator initialised with 32768 entries Oct 13 05:10:13.343806 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:10:13.343816 kernel: Serial: AMBA PL011 UART driver Oct 13 05:10:13.343828 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:10:13.343835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:10:13.343843 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 13 05:10:13.343851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 13 05:10:13.343860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:10:13.343871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:10:13.343880 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 13 05:10:13.343887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 13 05:10:13.343894 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:10:13.343902 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:10:13.343909 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:10:13.343918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:10:13.343926 kernel: ACPI: Interpreter enabled Oct 13 05:10:13.343933 kernel: ACPI: Using GIC for interrupt routing Oct 13 05:10:13.343941 kernel: ACPI: MCFG table detected, 1 entries Oct 13 05:10:13.343949 kernel: ACPI: CPU0 has been hot-added Oct 13 05:10:13.343956 kernel: ACPI: CPU1 has been hot-added Oct 13 05:10:13.343973 kernel: ACPI: CPU2 has been hot-added Oct 13 05:10:13.343982 kernel: ACPI: CPU3 has been hot-added Oct 13 05:10:13.343991 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 13 05:10:13.343999 kernel: printk: legacy console [ttyAMA0] enabled Oct 13 05:10:13.344007 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:10:13.344179 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:10:13.344274 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 13 05:10:13.344369 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 13 05:10:13.344487 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 13 05:10:13.344570 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 13 05:10:13.344580 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 13 05:10:13.344588 
kernel: PCI host bridge to bus 0000:00 Oct 13 05:10:13.344677 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 13 05:10:13.344759 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 13 05:10:13.344836 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 13 05:10:13.344908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:10:13.345018 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:10:13.345110 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 05:10:13.345196 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 13 05:10:13.345279 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 13 05:10:13.345361 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 13 05:10:13.345440 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 13 05:10:13.345520 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 13 05:10:13.345599 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 13 05:10:13.345672 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 13 05:10:13.345761 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 13 05:10:13.345839 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 13 05:10:13.345849 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 13 05:10:13.345857 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 13 05:10:13.345864 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 13 05:10:13.345872 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 13 05:10:13.345882 kernel: iommu: Default domain type: Translated Oct 13 05:10:13.345890 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 13 05:10:13.345898 kernel: efivars: Registered efivars operations Oct 13 05:10:13.345905 kernel: vgaarb: loaded Oct 13 05:10:13.345913 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 13 05:10:13.345921 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:10:13.345928 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:10:13.345935 kernel: pnp: PnP ACPI init Oct 13 05:10:13.346035 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 13 05:10:13.346047 kernel: pnp: PnP ACPI: found 1 devices Oct 13 05:10:13.346054 kernel: NET: Registered PF_INET protocol family Oct 13 05:10:13.346062 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 05:10:13.346070 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 05:10:13.346078 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:10:13.346088 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:10:13.346096 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 05:10:13.346103 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 05:10:13.346111 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:10:13.346119 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:10:13.346126 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:10:13.346133 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:10:13.346142 
kernel: kvm [1]: HYP mode not available Oct 13 05:10:13.346150 kernel: Initialise system trusted keyrings Oct 13 05:10:13.346157 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 05:10:13.346165 kernel: Key type asymmetric registered Oct 13 05:10:13.346172 kernel: Asymmetric key parser 'x509' registered Oct 13 05:10:13.346179 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 13 05:10:13.346187 kernel: io scheduler mq-deadline registered Oct 13 05:10:13.346196 kernel: io scheduler kyber registered Oct 13 05:10:13.346203 kernel: io scheduler bfq registered Oct 13 05:10:13.346211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 13 05:10:13.346218 kernel: ACPI: button: Power Button [PWRB] Oct 13 05:10:13.346226 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 13 05:10:13.346310 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 13 05:10:13.346320 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:10:13.346329 kernel: thunder_xcv, ver 1.0 Oct 13 05:10:13.346337 kernel: thunder_bgx, ver 1.0 Oct 13 05:10:13.346344 kernel: nicpf, ver 1.0 Oct 13 05:10:13.346352 kernel: nicvf, ver 1.0 Oct 13 05:10:13.346557 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 13 05:10:13.346645 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T05:10:12 UTC (1760332212) Oct 13 05:10:13.346656 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 05:10:13.346667 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 13 05:10:13.346675 kernel: watchdog: NMI not fully supported Oct 13 05:10:13.346683 kernel: watchdog: Hard watchdog permanently disabled Oct 13 05:10:13.346691 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:10:13.346705 kernel: Segment Routing with IPv6 Oct 13 05:10:13.346714 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:10:13.346721 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:10:13.346731 kernel: Key type dns_resolver registered Oct 13 05:10:13.346739 kernel: registered taskstats version 1 Oct 13 05:10:13.346746 kernel: Loading compiled-in X.509 certificates Oct 13 05:10:13.346753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 0d5be6bcdaeaf26c55e47d87e2567b03196058e4' Oct 13 05:10:13.346761 kernel: Demotion targets for Node 0: null Oct 13 05:10:13.346769 kernel: Key type .fscrypt registered Oct 13 05:10:13.346776 kernel: Key type fscrypt-provisioning registered Oct 13 05:10:13.346784 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 05:10:13.346792 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:10:13.346799 kernel: ima: No architecture policies found Oct 13 05:10:13.346807 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 13 05:10:13.346814 kernel: clk: Disabling unused clocks Oct 13 05:10:13.346822 kernel: PM: genpd: Disabling unused power domains Oct 13 05:10:13.346829 kernel: Freeing unused kernel memory: 12992K Oct 13 05:10:13.346838 kernel: Run /init as init process Oct 13 05:10:13.346845 kernel: with arguments: Oct 13 05:10:13.346853 kernel: /init Oct 13 05:10:13.346860 kernel: with environment: Oct 13 05:10:13.346867 kernel: HOME=/ Oct 13 05:10:13.346875 kernel: TERM=linux Oct 13 05:10:13.346882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:10:13.347023 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 13 05:10:13.347114 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 05:10:13.347124 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:10:13.347132 kernel: GPT:16515071 != 27000831 Oct 13 05:10:13.347139 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:10:13.347147 kernel: GPT:16515071 != 27000831 Oct 13 05:10:13.347154 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:10:13.347165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:10:13.347173 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347181 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347188 kernel: SCSI subsystem initialized Oct 13 05:10:13.347196 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347203 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:10:13.347211 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:10:13.347220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:10:13.347227 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 13 05:10:13.347235 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347242 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347249 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347256 kernel: raid6: neonx8 gen() 15782 MB/s Oct 13 05:10:13.347264 kernel: raid6: neonx4 gen() 15817 MB/s Oct 13 05:10:13.347273 kernel: raid6: neonx2 gen() 13291 MB/s Oct 13 05:10:13.347280 kernel: raid6: neonx1 gen() 10448 MB/s Oct 13 05:10:13.347287 kernel: raid6: int64x8 gen() 6905 MB/s Oct 13 05:10:13.347295 kernel: raid6: int64x4 gen() 7349 MB/s Oct 13 05:10:13.347302 kernel: raid6: int64x2 gen() 6101 MB/s Oct 13 05:10:13.347310 kernel: raid6: int64x1 gen() 5034 MB/s Oct 13 05:10:13.347317 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s Oct 13 05:10:13.347325 kernel: raid6: .... xor() 11775 MB/s, rmw enabled
Oct 13 05:10:13.347333 kernel: raid6: using neon recovery algorithm Oct 13 05:10:13.347341 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347349 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347356 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347363 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347370 kernel: xor: measuring software checksum speed Oct 13 05:10:13.347377 kernel: 8regs : 20398 MB/sec Oct 13 05:10:13.347385 kernel: 32regs : 21699 MB/sec Oct 13 05:10:13.347393 kernel: arm64_neon : 26804 MB/sec Oct 13 05:10:13.347401 kernel: xor: using function: arm64_neon (26804 MB/sec) Oct 13 05:10:13.347408 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347415 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:10:13.347423 kernel: BTRFS: device fsid 976d1a25-6e06-4ce9-b674-96d83e61f95d devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (203) Oct 13 05:10:13.347431 kernel: BTRFS info (device dm-0): first mount of filesystem 976d1a25-6e06-4ce9-b674-96d83e61f95d Oct 13 05:10:13.347439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 13 05:10:13.347448 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:10:13.347456 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:10:13.347463 kernel: Invalid ELF header magic: != \u007fELF Oct 13 05:10:13.347470 kernel: loop: module loaded Oct 13 05:10:13.347478 kernel: loop0: detected capacity change from 0 to 91456 Oct 13 05:10:13.347485 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:10:13.347494 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:10:13.347506 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:10:13.347514 systemd[1]: Detected virtualization kvm. Oct 13 05:10:13.347522 systemd[1]: Detected architecture arm64. Oct 13 05:10:13.347530 systemd[1]: Running in initrd. Oct 13 05:10:13.347537 systemd[1]: No hostname configured, using default hostname. Oct 13 05:10:13.347546 systemd[1]: Hostname set to . Oct 13 05:10:13.347555 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:10:13.347563 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:10:13.347577 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:10:13.347586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:10:13.347596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:10:13.347608 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:10:13.347626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:10:13.347636 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:10:13.347646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:10:13.347654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:10:13.347664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:10:13.347672 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:10:13.347681 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:10:13.347689 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:10:13.347697 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:10:13.347712 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:10:13.347721 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:10:13.347731 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:10:13.347739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:10:13.347748 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:10:13.347756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:10:13.347764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:10:13.347773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:10:13.347781 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:10:13.347790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:10:13.347799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:10:13.347807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:10:13.347815 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:10:13.347824 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:10:13.347833 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:10:13.347841 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:10:13.347851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:10:13.347859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:10:13.347868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:10:13.347931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:10:13.347942 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:10:13.347999 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:10:13.348034 systemd-journald[344]: Collecting audit messages is disabled. Oct 13 05:10:13.348059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:10:13.348067 kernel: Bridge firewalling registered Oct 13 05:10:13.348076 systemd-journald[344]: Journal started Oct 13 05:10:13.348094 systemd-journald[344]: Runtime Journal (/run/log/journal/e5c549cbb967455984c58e2a76c5a4df) is 6M, max 48.5M, 42.4M free. Oct 13 05:10:13.348252 systemd-modules-load[346]: Inserted module 'br_netfilter' Oct 13 05:10:13.356809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 13 05:10:13.359286 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:10:13.359931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:10:13.363893 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:10:13.365879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:10:13.368648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:10:13.373395 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:10:13.377486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:10:13.383483 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:10:13.386261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:10:13.388719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:10:13.392175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:10:13.395424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:10:13.396671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:10:13.399795 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:10:13.426896 dracut-cmdline[384]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99 Oct 13 05:10:13.448060 systemd-resolved[383]: Positive Trust Anchors: Oct 13 05:10:13.448076 systemd-resolved[383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:10:13.448079 systemd-resolved[383]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:10:13.448110 systemd-resolved[383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:10:13.470250 systemd-resolved[383]: Defaulting to hostname 'linux'. Oct 13 05:10:13.471158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:10:13.472342 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:10:13.508013 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:10:13.514997 kernel: iscsi: registered transport (tcp) Oct 13 05:10:13.528131 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:10:13.528193 kernel: QLogic iSCSI HBA Driver Oct 13 05:10:13.550931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Oct 13 05:10:13.565630 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:10:13.567263 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:10:13.613514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:10:13.616135 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:10:13.617758 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:10:13.651965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:10:13.655683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:10:13.685930 systemd-udevd[627]: Using default interface naming scheme 'v257'. Oct 13 05:10:13.693815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:10:13.698053 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:10:13.717956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:10:13.721494 dracut-pre-trigger[703]: rd.md=0: removing MD RAID activation Oct 13 05:10:13.723129 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:10:13.746229 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:10:13.749362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:10:13.770048 systemd-networkd[738]: lo: Link UP Oct 13 05:10:13.770058 systemd-networkd[738]: lo: Gained carrier Oct 13 05:10:13.770549 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:10:13.772027 systemd[1]: Reached target network.target - Network. Oct 13 05:10:13.810428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:10:13.813645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:10:13.869020 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:10:13.886777 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:10:13.896167 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:10:13.904679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:10:13.912794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:10:13.912935 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:10:13.912938 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:10:13.913649 systemd-networkd[738]: eth0: Link UP Oct 13 05:10:13.913808 systemd-networkd[738]: eth0: Gained carrier Oct 13 05:10:13.913828 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:10:13.918625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:10:13.918746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:10:13.920430 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 05:10:13.926203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:10:13.931034 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:10:13.932538 disk-uuid[800]: Primary Header is updated. Oct 13 05:10:13.932538 disk-uuid[800]: Secondary Entries is updated. Oct 13 05:10:13.932538 disk-uuid[800]: Secondary Header is updated. Oct 13 05:10:13.952032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:10:13.957996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:10:13.985990 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:10:13.987231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:10:13.989345 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:10:13.992251 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:10:14.022058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:10:14.957096 disk-uuid[802]: Warning: The kernel is still using the old partition table. Oct 13 05:10:14.957096 disk-uuid[802]: The new table will be used at the next reboot or after you Oct 13 05:10:14.957096 disk-uuid[802]: run partprobe(8) or kpartx(8) Oct 13 05:10:14.957096 disk-uuid[802]: The operation has completed successfully. Oct 13 05:10:14.962842 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:10:14.962947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:10:14.965074 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:10:14.999819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831) Oct 13 05:10:14.999871 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 05:10:14.999882 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 05:10:15.003083 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:10:15.003120 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:10:15.008990 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 05:10:15.010060 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:10:15.012099 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 05:10:15.112758 ignition[850]: Ignition 2.22.0 Oct 13 05:10:15.112771 ignition[850]: Stage: fetch-offline Oct 13 05:10:15.112805 ignition[850]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:15.112814 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:15.112890 ignition[850]: parsed url from cmdline: "" Oct 13 05:10:15.112893 ignition[850]: no config URL provided Oct 13 05:10:15.112897 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:10:15.112906 ignition[850]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:10:15.112944 ignition[850]: op(1): [started] loading QEMU firmware config module Oct 13 05:10:15.112949 ignition[850]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:10:15.117911 ignition[850]: op(1): [finished] loading QEMU firmware config module Oct 13 05:10:15.159141 ignition[850]: parsing config with SHA512: 62a791b3e67f87dac9646b5e4c180769d5217309619a488b80fce0ada114350e4731ab6c54ca5a0a1099624876bc5653b8a3752faf88b9b62850fbeb329e2b47 Oct 13 05:10:15.164301 unknown[850]: fetched base config from "system" Oct 13 05:10:15.164311 unknown[850]: fetched user config from "qemu" Oct 13 05:10:15.164701 ignition[850]: fetch-offline: fetch-offline passed Oct 13 05:10:15.167225 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:10:15.164759 ignition[850]: Ignition finished successfully Oct 13 05:10:15.168525 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:10:15.169353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:10:15.200074 ignition[867]: Ignition 2.22.0 Oct 13 05:10:15.200086 ignition[867]: Stage: kargs Oct 13 05:10:15.200228 ignition[867]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:15.200236 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:15.200997 ignition[867]: kargs: kargs passed Oct 13 05:10:15.201038 ignition[867]: Ignition finished successfully Oct 13 05:10:15.204000 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:10:15.206476 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:10:15.244268 ignition[875]: Ignition 2.22.0 Oct 13 05:10:15.244282 ignition[875]: Stage: disks Oct 13 05:10:15.244408 ignition[875]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:15.244415 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:15.245203 ignition[875]: disks: disks passed Oct 13 05:10:15.245245 ignition[875]: Ignition finished successfully Oct 13 05:10:15.248047 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:10:15.249508 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:10:15.250749 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:10:15.252453 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:10:15.254138 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:10:15.255904 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:10:15.258514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 13 05:10:15.303463 systemd-fsck[884]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 05:10:15.308486 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:10:15.310679 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:10:15.372920 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:10:15.374283 kernel: EXT4-fs (vda9): mounted filesystem a42694d5-feb9-4394-9ac1-a45818242d2d r/w with ordered data mode. Quota mode: none. Oct 13 05:10:15.374034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:10:15.376117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:10:15.377505 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:10:15.378267 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:10:15.378296 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:10:15.378319 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:10:15.390304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:10:15.392649 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:10:15.397702 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Oct 13 05:10:15.397731 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 05:10:15.397743 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 05:10:15.401090 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:10:15.401654 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:10:15.401921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:10:15.431928 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:10:15.436366 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:10:15.439755 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:10:15.442494 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:10:15.476077 systemd-networkd[738]: eth0: Gained IPv6LL Oct 13 05:10:15.508374 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:10:15.510763 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:10:15.513746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:10:15.531834 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:10:15.532994 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 05:10:15.547014 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:10:15.560106 ignition[1006]: INFO : Ignition 2.22.0 Oct 13 05:10:15.560865 ignition[1006]: INFO : Stage: mount Oct 13 05:10:15.561550 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:15.563028 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:15.565257 ignition[1006]: INFO : mount: mount passed Oct 13 05:10:15.565257 ignition[1006]: INFO : Ignition finished successfully Oct 13 05:10:15.567022 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Oct 13 05:10:15.570078 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:10:16.374414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:10:16.393374 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Oct 13 05:10:16.393408 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 05:10:16.393419 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 05:10:16.396142 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:10:16.396166 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:10:16.397341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:10:16.429793 ignition[1035]: INFO : Ignition 2.22.0 Oct 13 05:10:16.429793 ignition[1035]: INFO : Stage: files Oct 13 05:10:16.431517 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:16.431517 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:16.431517 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:10:16.435016 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:10:16.435016 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:10:16.435016 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:10:16.439273 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:10:16.439273 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:10:16.439273 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 13 05:10:16.439273 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 13 05:10:16.435533 unknown[1035]: wrote ssh authorized keys file for user: core Oct 13 05:10:16.489161 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:10:16.573047 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 13 05:10:16.575033 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 05:10:16.576959 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 13 05:10:16.776323 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 13 05:10:16.852219 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 05:10:16.852219 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:10:16.855842 
ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:10:16.855842 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:10:16.870059 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:10:16.870059 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 05:10:16.870059 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 05:10:16.870059 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 05:10:16.870059 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 13 05:10:17.054239 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 13 05:10:17.363392 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 13 05:10:17.363392 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 13 05:10:17.367361 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 13 05:10:17.369591 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:10:17.387499 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:10:17.391339 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:10:17.393194 ignition[1035]: INFO : files: files passed Oct 13 05:10:17.393194 ignition[1035]: INFO : Ignition finished successfully Oct 13 05:10:17.394036 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:10:17.396638 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:10:17.398500 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:10:17.413394 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:10:17.413523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:10:17.415939 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:10:17.418863 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:10:17.418863 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:10:17.422289 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:10:17.422304 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:10:17.423797 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:10:17.426582 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:10:17.480991 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:10:17.481849 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:10:17.484175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:10:17.485761 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:10:17.487750 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:10:17.489885 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:10:17.520590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:10:17.522837 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:10:17.553851 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:10:17.554056 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:10:17.556126 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:10:17.557977 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:10:17.559701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:10:17.559830 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:10:17.562203 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:10:17.563959 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:10:17.565544 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:10:17.566979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:10:17.568832 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:10:17.570635 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:10:17.572304 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:10:17.573907 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:10:17.575657 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:10:17.577430 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:10:17.578983 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:10:17.580458 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:10:17.580582 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:10:17.582995 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:10:17.584797 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:10:17.586664 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:10:17.590030 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:10:17.591002 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:10:17.591117 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:10:17.594082 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:10:17.594197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:10:17.595883 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:10:17.597316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:10:17.598281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:10:17.599251 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:10:17.600591 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:10:17.602239 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:10:17.602320 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:10:17.604217 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:10:17.604296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:10:17.605666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:10:17.605781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:10:17.607256 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:10:17.607357 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:10:17.609626 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Oct 13 05:10:17.611203 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:10:17.611324 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:10:17.613656 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:10:17.614507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:10:17.614631 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:10:17.616630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:10:17.616740 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:10:17.618209 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:10:17.618312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:10:17.623620 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:10:17.623716 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:10:17.634789 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:10:17.639331 ignition[1092]: INFO : Ignition 2.22.0 Oct 13 05:10:17.639331 ignition[1092]: INFO : Stage: umount Oct 13 05:10:17.640960 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:10:17.640960 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:10:17.640960 ignition[1092]: INFO : umount: umount passed Oct 13 05:10:17.640960 ignition[1092]: INFO : Ignition finished successfully Oct 13 05:10:17.642240 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:10:17.642357 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:10:17.645309 systemd[1]: Stopped target network.target - Network. Oct 13 05:10:17.646376 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:10:17.646425 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:10:17.647794 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:10:17.647841 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:10:17.649482 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:10:17.649531 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:10:17.651059 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:10:17.651102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:10:17.652638 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:10:17.654265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:10:17.662318 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:10:17.662425 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:10:17.667352 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:10:17.669130 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:10:17.673322 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:10:17.674210 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:10:17.674252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:10:17.676832 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:10:17.677876 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Oct 13 05:10:17.677939 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:10:17.679912 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:10:17.679957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:10:17.683615 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:10:17.683657 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:10:17.685818 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:10:17.688539 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:10:17.695151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:10:17.696311 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:10:17.696396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:10:17.699565 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:10:17.699718 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:10:17.701140 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:10:17.701177 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:10:17.702731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:10:17.702761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:10:17.704698 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:10:17.704748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:10:17.707394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:10:17.707437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:10:17.710079 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:10:17.710132 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:10:17.713484 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:10:17.714335 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:10:17.714387 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:10:17.716092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:10:17.716130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:10:17.717811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:10:17.717854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:10:17.732242 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:10:17.732371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:10:17.736244 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:10:17.736325 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:10:17.738203 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:10:17.740113 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:10:17.754605 systemd[1]: Switching root. Oct 13 05:10:17.787017 systemd-journald[344]: Journal stopped Oct 13 05:10:18.607420 systemd-journald[344]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:10:18.607472 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:10:18.607487 kernel: SELinux: policy capability open_perms=1 Oct 13 05:10:18.607497 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:10:18.607508 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:10:18.607521 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:10:18.607532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:10:18.607542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:10:18.607554 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:10:18.607563 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:10:18.607574 systemd[1]: Successfully loaded SELinux policy in 59.643ms. Oct 13 05:10:18.607594 kernel: audit: type=1403 audit(1760332217.996:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:10:18.607607 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.814ms. Oct 13 05:10:18.607623 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:10:18.607635 systemd[1]: Detected virtualization kvm. Oct 13 05:10:18.607646 systemd[1]: Detected architecture arm64. Oct 13 05:10:18.607657 systemd[1]: Detected first boot. Oct 13 05:10:18.607668 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:10:18.607679 zram_generator::config[1140]: No configuration found. Oct 13 05:10:18.607705 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:10:18.607715 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:10:18.607728 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:10:18.607739 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:10:18.607750 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:10:18.607761 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:10:18.607772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:10:18.607785 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:10:18.607795 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:10:18.607806 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:10:18.607817 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:10:18.607827 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:10:18.607838 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:10:18.607850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:10:18.607861 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:10:18.607872 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:10:18.607883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
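
The "Initializing machine ID from SMBIOS/DMI UUID" message means the first-boot machine ID is seeded from the UUID the virtual firmware exposes to the guest. A minimal Python sketch of where that value lives on a KVM guest; the normalization shown here (lowercase hex, dashes stripped) is an assumption about the derivation rather than systemd's exact code path, and reading the sysfs attribute normally requires root.

    from pathlib import Path
    import uuid

    # Product UUID exposed by the (virtual) firmware; usually root-only readable.
    product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    # Assumed normalization to machine-id form: 32 lowercase hex chars, no dashes.
    machine_id = uuid.UUID(product_uuid).hex
    print(product_uuid, "->", machine_id)
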
Oct 13 05:10:18.607894 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:10:18.607905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:10:18.607915 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 13 05:10:18.607927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:10:18.607937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:10:18.607948 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:10:18.607958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:10:18.607996 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:10:18.608009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:10:18.608022 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:10:18.608033 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:10:18.608044 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:10:18.608056 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:10:18.608066 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:10:18.608077 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:10:18.608088 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:10:18.608099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:10:18.608110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:10:18.608121 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:10:18.608132 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:10:18.608142 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:10:18.608153 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:10:18.608163 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:10:18.608174 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:10:18.608190 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:10:18.608200 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:10:18.608211 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:10:18.608221 systemd[1]: Reached target machines.target - Containers. Oct 13 05:10:18.608232 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:10:18.608243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:10:18.608253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:10:18.608265 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:10:18.608276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:10:18.608286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 13 05:10:18.608297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:10:18.608308 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:10:18.608318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:10:18.608329 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:10:18.608341 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:10:18.608352 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:10:18.608362 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:10:18.608372 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:10:18.608384 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:10:18.608395 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:10:18.608406 kernel: fuse: init (API version 7.41) Oct 13 05:10:18.608417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:10:18.608427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:10:18.608438 kernel: ACPI: bus type drm_connector registered Oct 13 05:10:18.608449 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:10:18.608460 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:10:18.608471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:10:18.608483 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:10:18.608495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:10:18.608506 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:10:18.608516 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:10:18.608528 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:10:18.608539 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:10:18.608550 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:10:18.608582 systemd-journald[1206]: Collecting audit messages is disabled. Oct 13 05:10:18.608604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:10:18.608615 systemd-journald[1206]: Journal started Oct 13 05:10:18.608638 systemd-journald[1206]: Runtime Journal (/run/log/journal/e5c549cbb967455984c58e2a76c5a4df) is 6M, max 48.5M, 42.4M free. Oct 13 05:10:18.403495 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:10:18.414048 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:10:18.414508 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:10:18.611777 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:10:18.612871 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:10:18.613223 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:10:18.614623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 13 05:10:18.614819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:10:18.616237 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:10:18.616398 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:10:18.617749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:10:18.617925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:10:18.619442 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:10:18.619620 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:10:18.621090 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:10:18.621253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:10:18.622702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:10:18.624281 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:10:18.626483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:10:18.628508 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:10:18.641210 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:10:18.642730 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:10:18.645047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:10:18.647048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:10:18.648182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:10:18.648235 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:10:18.650161 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:10:18.651611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:10:18.657799 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:10:18.659985 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:10:18.661232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:10:18.662337 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:10:18.663513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:10:18.666096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:10:18.668080 systemd-journald[1206]: Time spent on flushing to /var/log/journal/e5c549cbb967455984c58e2a76c5a4df is 14.426ms for 886 entries. Oct 13 05:10:18.668080 systemd-journald[1206]: System Journal (/var/log/journal/e5c549cbb967455984c58e2a76c5a4df) is 8M, max 163.5M, 155.5M free. Oct 13 05:10:18.695225 systemd-journald[1206]: Received client request to flush runtime journal. Oct 13 05:10:18.695311 kernel: loop1: detected capacity change from 0 to 119344 Oct 13 05:10:18.668271 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
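
The journald figures above allow a quick cost estimate: flushing 886 runtime-journal entries to /var/log/journal took 14.426 ms, i.e. roughly 16 µs per entry. A two-line check of that arithmetic, using the values copied from the log:

    # Figures reported by systemd-journald above.
    flush_ms = 14.426   # time spent flushing to /var/log/journal/...
    entries = 886       # journal entries flushed

    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} us per entry")   # ~16.3 us
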
Oct 13 05:10:18.671114 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:10:18.674985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:10:18.677337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:10:18.679246 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:10:18.680804 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:10:18.683953 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:10:18.687226 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:10:18.699857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:10:18.701834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:10:18.711993 kernel: loop2: detected capacity change from 0 to 100624 Oct 13 05:10:18.712599 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:10:18.720549 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:10:18.723811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:10:18.726102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:10:18.734001 kernel: loop3: detected capacity change from 0 to 200800 Oct 13 05:10:18.737627 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:10:18.751158 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Oct 13 05:10:18.751175 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Oct 13 05:10:18.754690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:10:18.760002 kernel: loop4: detected capacity change from 0 to 119344 Oct 13 05:10:18.766991 kernel: loop5: detected capacity change from 0 to 100624 Oct 13 05:10:18.767870 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:10:18.775074 kernel: loop6: detected capacity change from 0 to 200800 Oct 13 05:10:18.779750 (sd-merge)[1278]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 13 05:10:18.782877 (sd-merge)[1278]: Merged extensions into '/usr'. Oct 13 05:10:18.786451 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:10:18.786474 systemd[1]: Reloading... Oct 13 05:10:18.819331 systemd-resolved[1272]: Positive Trust Anchors: Oct 13 05:10:18.819347 systemd-resolved[1272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:10:18.819351 systemd-resolved[1272]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:10:18.819383 systemd-resolved[1272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:10:18.825996 systemd-resolved[1272]: Defaulting to hostname 'linux'. Oct 13 05:10:18.838017 zram_generator::config[1311]: No configuration found. Oct 13 05:10:19.000200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:10:19.000417 systemd[1]: Reloading finished in 213 ms. Oct 13 05:10:19.039838 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:10:19.043014 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:10:19.046357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:10:19.070310 systemd[1]: Starting ensure-sysext.service... Oct 13 05:10:19.072369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:10:19.082128 systemd[1]: Reload requested from client PID 1345 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:10:19.082149 systemd[1]: Reloading... Oct 13 05:10:19.086488 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:10:19.086522 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:10:19.086742 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:10:19.086933 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:10:19.087552 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:10:19.087752 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Oct 13 05:10:19.087795 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Oct 13 05:10:19.091459 systemd-tmpfiles[1346]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:10:19.091472 systemd-tmpfiles[1346]: Skipping /boot Oct 13 05:10:19.097760 systemd-tmpfiles[1346]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:10:19.097772 systemd-tmpfiles[1346]: Skipping /boot Oct 13 05:10:19.119990 zram_generator::config[1373]: No configuration found. Oct 13 05:10:19.278454 systemd[1]: Reloading finished in 195 ms. Oct 13 05:10:19.302802 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:10:19.326476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:10:19.334886 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:10:19.337787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:10:19.348132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
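
The sd-merge lines show systemd-sysext discovering the three extension images (the kubernetes.raw link was placed in /etc/extensions by Ignition earlier) and overlaying them onto /usr. The Python sketch below illustrates only the discovery step, assuming the standard sysext search directories; the function name and the precedence handling are illustrative, not systemd's implementation.

    from pathlib import Path

    # Directories systemd-sysext scans for extension images or trees.
    SEARCH_DIRS = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
        "/usr/local/lib/extensions",
        "/usr/lib/extensions",
    ]

    def discover_extensions():
        """Return extension images a merge would consider, keyed by name."""
        found = {}
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                # More local directories listed first take precedence here,
                # mirroring the usual local-overrides-vendor convention.
                found.setdefault(entry.name, entry)
        return found

    if __name__ == "__main__":
        for name, path in discover_extensions().items():
            print(f"{name} -> {path}")
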
Oct 13 05:10:19.350766 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:10:19.353553 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:10:19.358926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:10:19.363216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:10:19.368204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:10:19.372566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:10:19.375238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:10:19.377470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:10:19.377596 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:10:19.379054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:10:19.379274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:10:19.388203 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:10:19.389128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:10:19.391322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:10:19.391546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:10:19.394565 systemd-udevd[1417]: Using default interface naming scheme 'v257'. Oct 13 05:10:19.400752 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:10:19.403265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:10:19.407609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:10:19.411253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:10:19.415253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:10:19.418752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:10:19.421160 augenrules[1447]: No rules Oct 13 05:10:19.422209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:10:19.426206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:10:19.426266 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:10:19.428254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:10:19.431391 systemd[1]: Finished ensure-sysext.service. Oct 13 05:10:19.432636 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:10:19.432855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:10:19.435929 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 13 05:10:19.438551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:10:19.438795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:10:19.440734 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:10:19.440954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:10:19.442613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:10:19.442832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:10:19.444600 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:10:19.444814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:10:19.458235 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:10:19.459382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:10:19.459455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:10:19.462232 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:10:19.463340 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:10:19.531561 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 13 05:10:19.540822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:10:19.544433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:10:19.554881 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:10:19.556108 systemd-networkd[1480]: lo: Link UP Oct 13 05:10:19.556121 systemd-networkd[1480]: lo: Gained carrier Oct 13 05:10:19.556613 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:10:19.557005 systemd-networkd[1480]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:10:19.557015 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:10:19.558061 systemd-networkd[1480]: eth0: Link UP Oct 13 05:10:19.558077 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:10:19.558199 systemd-networkd[1480]: eth0: Gained carrier Oct 13 05:10:19.558212 systemd-networkd[1480]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:10:19.559833 systemd[1]: Reached target network.target - Network. Oct 13 05:10:19.562474 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:10:19.569193 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:10:19.572528 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 13 05:10:19.578048 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:10:19.578610 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Oct 13 05:10:20.076423 systemd-timesyncd[1481]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:10:20.076474 systemd-timesyncd[1481]: Initial clock synchronization to Mon 2025-10-13 05:10:20.076334 UTC. Oct 13 05:10:20.077519 systemd-resolved[1272]: Clock change detected. Flushing caches. Oct 13 05:10:20.086189 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:10:20.173387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:10:20.183435 ldconfig[1414]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:10:20.188642 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:10:20.193319 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:10:20.213731 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:10:20.224156 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:10:20.226911 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:10:20.228180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:10:20.229553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:10:20.230996 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:10:20.232259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:10:20.233693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:10:20.234970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:10:20.235009 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:10:20.235977 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:10:20.237858 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:10:20.240313 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:10:20.243302 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:10:20.244698 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:10:20.246093 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:10:20.249378 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:10:20.250958 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:10:20.252744 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:10:20.253976 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:10:20.254990 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:10:20.255996 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:10:20.256032 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 13 05:10:20.256994 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:10:20.259046 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:10:20.260960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:10:20.263139 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:10:20.265125 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:10:20.266216 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:10:20.268384 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:10:20.269913 jq[1529]: false Oct 13 05:10:20.271275 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:10:20.274288 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:10:20.276437 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:10:20.278630 extend-filesystems[1530]: Found /dev/vda6 Oct 13 05:10:20.281387 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:10:20.282416 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:10:20.283057 extend-filesystems[1530]: Found /dev/vda9 Oct 13 05:10:20.282898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:10:20.283521 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:10:20.284163 extend-filesystems[1530]: Checking size of /dev/vda9 Oct 13 05:10:20.289264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:10:20.292218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:10:20.295532 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:10:20.295738 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:10:20.296053 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:10:20.296242 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:10:20.299608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:10:20.299856 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:10:20.303511 jq[1549]: true Oct 13 05:10:20.305157 extend-filesystems[1530]: Resized partition /dev/vda9 Oct 13 05:10:20.308487 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:10:20.317192 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 05:10:20.319595 jq[1562]: true Oct 13 05:10:20.326210 tar[1558]: linux-arm64/LICENSE Oct 13 05:10:20.326210 tar[1558]: linux-arm64/helm Oct 13 05:10:20.334194 update_engine[1546]: I20251013 05:10:20.333945 1546 main.cc:92] Flatcar Update Engine starting Oct 13 05:10:20.337284 dbus-daemon[1527]: [system] SELinux support is enabled Oct 13 05:10:20.337890 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 13 05:10:20.341789 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:10:20.343993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:10:20.344026 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:10:20.347050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:10:20.347091 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:10:20.351155 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 05:10:20.365585 update_engine[1546]: I20251013 05:10:20.352329 1546 update_check_scheduler.cc:74] Next update check in 8m39s Oct 13 05:10:20.357468 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:10:20.360072 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:10:20.366425 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:10:20.366425 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:10:20.366425 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 13 05:10:20.373493 extend-filesystems[1530]: Resized filesystem in /dev/vda9 Oct 13 05:10:20.367171 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:10:20.369499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:10:20.391611 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 05:10:20.393380 systemd-logind[1542]: New seat seat0. Oct 13 05:10:20.394429 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:10:20.406170 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:10:20.406516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:10:20.415087 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
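
The extend-filesystems/resize2fs lines above grow the root filesystem in place from 456704 to 1784827 blocks of 4 KiB. Converting those logged block counts into sizes is simple arithmetic:

    BLOCK = 4096                                   # "(4k) blocks" per resize2fs
    old_blocks, new_blocks = 456_704, 1_784_827    # from the kernel/resize2fs lines

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")                # ~1.74 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")                # ~6.81 GiB
    print(f"grown by {gib(new_blocks - old_blocks):.2f} GiB")  # ~5.07 GiB
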
Oct 13 05:10:20.450281 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:10:20.523725 containerd[1574]: time="2025-10-13T05:10:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:10:20.524542 containerd[1574]: time="2025-10-13T05:10:20.524497111Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534177151Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.12µs" Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534214951Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534240591Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534378151Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534392831Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534415431Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534463111Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534474271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534658831Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534672711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534684071Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535103 containerd[1574]: time="2025-10-13T05:10:20.534691991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.534764951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.534961871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.534988951Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.534999791Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.535034431Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.535289151Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:10:20.535359 containerd[1574]: time="2025-10-13T05:10:20.535353111Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:10:20.539190 containerd[1574]: time="2025-10-13T05:10:20.539148391Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:10:20.539256 containerd[1574]: time="2025-10-13T05:10:20.539200071Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:10:20.539256 containerd[1574]: time="2025-10-13T05:10:20.539223631Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:10:20.539256 containerd[1574]: time="2025-10-13T05:10:20.539239071Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:10:20.539256 containerd[1574]: time="2025-10-13T05:10:20.539250631Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:10:20.539358 containerd[1574]: time="2025-10-13T05:10:20.539338071Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:10:20.539358 containerd[1574]: time="2025-10-13T05:10:20.539355471Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:10:20.539401 containerd[1574]: time="2025-10-13T05:10:20.539368071Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:10:20.539401 containerd[1574]: time="2025-10-13T05:10:20.539381871Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:10:20.539401 containerd[1574]: time="2025-10-13T05:10:20.539392591Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:10:20.539497 containerd[1574]: time="2025-10-13T05:10:20.539401631Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:10:20.539497 containerd[1574]: time="2025-10-13T05:10:20.539414071Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:10:20.539593 containerd[1574]: time="2025-10-13T05:10:20.539556591Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:10:20.539593 containerd[1574]: time="2025-10-13T05:10:20.539585911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:10:20.539635 containerd[1574]: time="2025-10-13T05:10:20.539601911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 
05:10:20.539635 containerd[1574]: time="2025-10-13T05:10:20.539618391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:10:20.539635 containerd[1574]: time="2025-10-13T05:10:20.539630311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:10:20.539687 containerd[1574]: time="2025-10-13T05:10:20.539640671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:10:20.539687 containerd[1574]: time="2025-10-13T05:10:20.539651551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:10:20.539687 containerd[1574]: time="2025-10-13T05:10:20.539660871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:10:20.539687 containerd[1574]: time="2025-10-13T05:10:20.539674711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:10:20.539687 containerd[1574]: time="2025-10-13T05:10:20.539685991Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:10:20.539770 containerd[1574]: time="2025-10-13T05:10:20.539696671Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:10:20.539917 containerd[1574]: time="2025-10-13T05:10:20.539888391Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:10:20.539942 containerd[1574]: time="2025-10-13T05:10:20.539928391Z" level=info msg="Start snapshots syncer" Oct 13 05:10:20.539976 containerd[1574]: time="2025-10-13T05:10:20.539950671Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:10:20.540240 containerd[1574]: time="2025-10-13T05:10:20.540188311Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:10:20.540354 containerd[1574]: time="2025-10-13T05:10:20.540242391Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:10:20.540354 containerd[1574]: time="2025-10-13T05:10:20.540312191Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:10:20.540446 containerd[1574]: time="2025-10-13T05:10:20.540412351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:10:20.540446 containerd[1574]: time="2025-10-13T05:10:20.540442071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:10:20.540493 containerd[1574]: time="2025-10-13T05:10:20.540453791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:10:20.540493 containerd[1574]: time="2025-10-13T05:10:20.540465511Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:10:20.540493 containerd[1574]: time="2025-10-13T05:10:20.540482791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:10:20.540493 containerd[1574]: time="2025-10-13T05:10:20.540493431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:10:20.540565 containerd[1574]: time="2025-10-13T05:10:20.540505231Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:10:20.540565 containerd[1574]: time="2025-10-13T05:10:20.540530591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:10:20.540565 containerd[1574]: 
time="2025-10-13T05:10:20.540542591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:10:20.540565 containerd[1574]: time="2025-10-13T05:10:20.540554271Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:10:20.540631 containerd[1574]: time="2025-10-13T05:10:20.540588191Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:10:20.540631 containerd[1574]: time="2025-10-13T05:10:20.540602231Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:10:20.540631 containerd[1574]: time="2025-10-13T05:10:20.540610351Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:10:20.540631 containerd[1574]: time="2025-10-13T05:10:20.540619351Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:10:20.540631 containerd[1574]: time="2025-10-13T05:10:20.540626991Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:10:20.540709 containerd[1574]: time="2025-10-13T05:10:20.540636511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:10:20.540709 containerd[1574]: time="2025-10-13T05:10:20.540647111Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:10:20.540744 containerd[1574]: time="2025-10-13T05:10:20.540724231Z" level=info msg="runtime interface created" Oct 13 05:10:20.540744 containerd[1574]: time="2025-10-13T05:10:20.540729351Z" level=info msg="created NRI interface" Oct 13 05:10:20.540744 containerd[1574]: time="2025-10-13T05:10:20.540737191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:10:20.540790 containerd[1574]: time="2025-10-13T05:10:20.540747191Z" level=info msg="Connect containerd service" Oct 13 05:10:20.540790 containerd[1574]: time="2025-10-13T05:10:20.540776311Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:10:20.541547 containerd[1574]: time="2025-10-13T05:10:20.541499831Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:10:20.617202 containerd[1574]: time="2025-10-13T05:10:20.616891551Z" level=info msg="Start subscribing containerd event" Oct 13 05:10:20.617362 containerd[1574]: time="2025-10-13T05:10:20.617345951Z" level=info msg="Start recovering state" Oct 13 05:10:20.617496 containerd[1574]: time="2025-10-13T05:10:20.617239471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:10:20.617529 containerd[1574]: time="2025-10-13T05:10:20.617520631Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 13 05:10:20.617575 containerd[1574]: time="2025-10-13T05:10:20.617560951Z" level=info msg="Start event monitor" Oct 13 05:10:20.617635 containerd[1574]: time="2025-10-13T05:10:20.617617231Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:10:20.617769 containerd[1574]: time="2025-10-13T05:10:20.617696311Z" level=info msg="Start streaming server" Oct 13 05:10:20.617769 containerd[1574]: time="2025-10-13T05:10:20.617713031Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:10:20.617769 containerd[1574]: time="2025-10-13T05:10:20.617720951Z" level=info msg="runtime interface starting up..." Oct 13 05:10:20.617769 containerd[1574]: time="2025-10-13T05:10:20.617727231Z" level=info msg="starting plugins..." Oct 13 05:10:20.617769 containerd[1574]: time="2025-10-13T05:10:20.617742791Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:10:20.618352 containerd[1574]: time="2025-10-13T05:10:20.618328631Z" level=info msg="containerd successfully booted in 0.095162s" Oct 13 05:10:20.618463 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:10:20.653795 tar[1558]: linux-arm64/README.md Oct 13 05:10:20.671313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:10:21.797326 systemd-networkd[1480]: eth0: Gained IPv6LL Oct 13 05:10:21.800565 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:10:21.801992 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:10:21.805689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:10:21.807932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:21.810028 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:10:21.839281 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:10:21.839508 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:10:21.841741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:10:21.844499 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:10:21.872364 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:10:21.892646 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:10:21.896035 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:10:21.911906 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:10:21.912221 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:10:21.916038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:10:21.933277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:10:21.936888 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:10:21.939581 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 05:10:21.941058 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:10:22.374045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:22.375738 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:10:22.376775 systemd[1]: Startup finished in 1.117s (kernel) + 4.895s (initrd) + 3.942s (userspace) = 9.955s. 
Oct 13 05:10:22.378477 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:10:22.682871 kubelet[1665]: E1013 05:10:22.682736 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:10:22.685408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:10:22.685552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:10:22.685906 systemd[1]: kubelet.service: Consumed 694ms CPU time, 249.1M memory peak. Oct 13 05:10:24.765649 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:10:24.766790 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:44688.service - OpenSSH per-connection server daemon (10.0.0.1:44688). Oct 13 05:10:24.829903 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 44688 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:24.833210 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:24.840731 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:10:24.841743 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:10:24.849725 systemd-logind[1542]: New session 1 of user core. Oct 13 05:10:24.863493 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:10:24.865977 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:10:24.881452 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:10:24.883662 systemd-logind[1542]: New session c1 of user core. Oct 13 05:10:24.996968 systemd[1683]: Queued start job for default target default.target. Oct 13 05:10:25.019101 systemd[1683]: Created slice app.slice - User Application Slice. Oct 13 05:10:25.019155 systemd[1683]: Reached target paths.target - Paths. Oct 13 05:10:25.019200 systemd[1683]: Reached target timers.target - Timers. Oct 13 05:10:25.020359 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:10:25.029993 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:10:25.030054 systemd[1683]: Reached target sockets.target - Sockets. Oct 13 05:10:25.030089 systemd[1683]: Reached target basic.target - Basic System. Oct 13 05:10:25.030117 systemd[1683]: Reached target default.target - Main User Target. Oct 13 05:10:25.030163 systemd[1683]: Startup finished in 140ms. Oct 13 05:10:25.030300 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:10:25.031860 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:10:25.093579 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:44702.service - OpenSSH per-connection server daemon (10.0.0.1:44702). Oct 13 05:10:25.159364 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 44702 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:25.160662 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:25.165225 systemd-logind[1542]: New session 2 of user core. 
Oct 13 05:10:25.176310 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:10:25.226922 sshd[1697]: Connection closed by 10.0.0.1 port 44702 Oct 13 05:10:25.227356 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Oct 13 05:10:25.242542 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:44702.service: Deactivated successfully. Oct 13 05:10:25.244086 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:10:25.244753 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:10:25.248085 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:44704.service - OpenSSH per-connection server daemon (10.0.0.1:44704). Oct 13 05:10:25.249076 systemd-logind[1542]: Removed session 2. Oct 13 05:10:25.309842 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 44704 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:25.311124 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:25.315766 systemd-logind[1542]: New session 3 of user core. Oct 13 05:10:25.320287 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:10:25.369348 sshd[1706]: Connection closed by 10.0.0.1 port 44704 Oct 13 05:10:25.369754 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Oct 13 05:10:25.385343 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:44704.service: Deactivated successfully. Oct 13 05:10:25.386752 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:10:25.389171 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:10:25.390916 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:44718.service - OpenSSH per-connection server daemon (10.0.0.1:44718). Oct 13 05:10:25.391510 systemd-logind[1542]: Removed session 3. Oct 13 05:10:25.457727 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:25.458906 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:25.462714 systemd-logind[1542]: New session 4 of user core. Oct 13 05:10:25.479278 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:10:25.529982 sshd[1715]: Connection closed by 10.0.0.1 port 44718 Oct 13 05:10:25.530324 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Oct 13 05:10:25.543248 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:44718.service: Deactivated successfully. Oct 13 05:10:25.545463 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:10:25.546167 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:10:25.548544 systemd-logind[1542]: Removed session 4. Oct 13 05:10:25.549499 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:48894.service - OpenSSH per-connection server daemon (10.0.0.1:48894). Oct 13 05:10:25.606282 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 48894 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:25.607398 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:25.611205 systemd-logind[1542]: New session 5 of user core. Oct 13 05:10:25.623264 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:10:25.679889 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:10:25.680170 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:10:25.704005 sudo[1725]: pam_unix(sudo:session): session closed for user root Oct 13 05:10:25.705840 sshd[1724]: Connection closed by 10.0.0.1 port 48894 Oct 13 05:10:25.706296 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Oct 13 05:10:25.729620 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:48894.service: Deactivated successfully. Oct 13 05:10:25.731281 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:10:25.732786 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:10:25.734762 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:48898.service - OpenSSH per-connection server daemon (10.0.0.1:48898). Oct 13 05:10:25.735582 systemd-logind[1542]: Removed session 5. Oct 13 05:10:25.784433 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 48898 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:25.785656 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:25.789681 systemd-logind[1542]: New session 6 of user core. Oct 13 05:10:25.797278 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:10:25.848487 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:10:25.849052 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:10:25.905516 sudo[1736]: pam_unix(sudo:session): session closed for user root Oct 13 05:10:25.911806 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:10:25.912068 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:10:25.920353 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:10:25.960146 augenrules[1758]: No rules Oct 13 05:10:25.961208 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:10:25.963178 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:10:25.964007 sudo[1735]: pam_unix(sudo:session): session closed for user root Oct 13 05:10:25.965503 sshd[1734]: Connection closed by 10.0.0.1 port 48898 Oct 13 05:10:25.965878 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Oct 13 05:10:25.977213 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:48898.service: Deactivated successfully. Oct 13 05:10:25.979452 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:10:25.980323 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:10:25.982601 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:48910.service - OpenSSH per-connection server daemon (10.0.0.1:48910). Oct 13 05:10:25.983185 systemd-logind[1542]: Removed session 6. Oct 13 05:10:26.045645 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 48910 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:10:26.046961 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:10:26.051196 systemd-logind[1542]: New session 7 of user core. Oct 13 05:10:26.061307 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 13 05:10:26.113509 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:10:26.114100 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:10:26.389367 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:10:26.412462 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:10:26.610554 dockerd[1791]: time="2025-10-13T05:10:26.610477591Z" level=info msg="Starting up" Oct 13 05:10:26.611431 dockerd[1791]: time="2025-10-13T05:10:26.611407991Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:10:26.622376 dockerd[1791]: time="2025-10-13T05:10:26.622336311Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:10:26.765697 dockerd[1791]: time="2025-10-13T05:10:26.765378471Z" level=info msg="Loading containers: start." Oct 13 05:10:26.776878 kernel: Initializing XFRM netlink socket Oct 13 05:10:26.974839 systemd-networkd[1480]: docker0: Link UP Oct 13 05:10:26.977865 dockerd[1791]: time="2025-10-13T05:10:26.977824271Z" level=info msg="Loading containers: done." Oct 13 05:10:26.990508 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4091231083-merged.mount: Deactivated successfully. Oct 13 05:10:26.991721 dockerd[1791]: time="2025-10-13T05:10:26.991244471Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:10:26.991721 dockerd[1791]: time="2025-10-13T05:10:26.991334511Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:10:26.991721 dockerd[1791]: time="2025-10-13T05:10:26.991414991Z" level=info msg="Initializing buildkit" Oct 13 05:10:27.013911 dockerd[1791]: time="2025-10-13T05:10:27.013850111Z" level=info msg="Completed buildkit initialization" Oct 13 05:10:27.021398 dockerd[1791]: time="2025-10-13T05:10:27.020900311Z" level=info msg="Daemon has completed initialization" Oct 13 05:10:27.021398 dockerd[1791]: time="2025-10-13T05:10:27.021083671Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:10:27.021244 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:10:27.421744 containerd[1574]: time="2025-10-13T05:10:27.421649791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:10:27.982730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498703256.mount: Deactivated successfully. 
Oct 13 05:10:28.955745 containerd[1574]: time="2025-10-13T05:10:28.955679031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:28.957045 containerd[1574]: time="2025-10-13T05:10:28.956783791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 13 05:10:28.957857 containerd[1574]: time="2025-10-13T05:10:28.957825471Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:28.960372 containerd[1574]: time="2025-10-13T05:10:28.960340751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:28.961537 containerd[1574]: time="2025-10-13T05:10:28.961377831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.53969256s" Oct 13 05:10:28.961537 containerd[1574]: time="2025-10-13T05:10:28.961416591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 13 05:10:28.962094 containerd[1574]: time="2025-10-13T05:10:28.962060831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:10:29.916420 containerd[1574]: time="2025-10-13T05:10:29.916367431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:29.916833 containerd[1574]: time="2025-10-13T05:10:29.916792151Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 13 05:10:29.917667 containerd[1574]: time="2025-10-13T05:10:29.917637031Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:29.920276 containerd[1574]: time="2025-10-13T05:10:29.920220231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:29.921348 containerd[1574]: time="2025-10-13T05:10:29.921316391Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 959.10364ms" Oct 13 05:10:29.921422 containerd[1574]: time="2025-10-13T05:10:29.921352551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 13 05:10:29.922101 
containerd[1574]: time="2025-10-13T05:10:29.922069191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:10:30.817754 containerd[1574]: time="2025-10-13T05:10:30.817696271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:30.818186 containerd[1574]: time="2025-10-13T05:10:30.818156671Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 13 05:10:30.819063 containerd[1574]: time="2025-10-13T05:10:30.819036791Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:30.821412 containerd[1574]: time="2025-10-13T05:10:30.821383591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:30.822802 containerd[1574]: time="2025-10-13T05:10:30.822763631Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 900.65912ms" Oct 13 05:10:30.822841 containerd[1574]: time="2025-10-13T05:10:30.822804911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 13 05:10:30.823355 containerd[1574]: time="2025-10-13T05:10:30.823233511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:10:31.722911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852455271.mount: Deactivated successfully. 
Oct 13 05:10:32.008243 containerd[1574]: time="2025-10-13T05:10:32.008121551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:32.009218 containerd[1574]: time="2025-10-13T05:10:32.009140591Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 13 05:10:32.010603 containerd[1574]: time="2025-10-13T05:10:32.010578711Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:32.012427 containerd[1574]: time="2025-10-13T05:10:32.012395391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:32.013365 containerd[1574]: time="2025-10-13T05:10:32.013238911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.1899722s" Oct 13 05:10:32.013365 containerd[1574]: time="2025-10-13T05:10:32.013272991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 13 05:10:32.013748 containerd[1574]: time="2025-10-13T05:10:32.013710551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:10:32.578817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907367709.mount: Deactivated successfully. Oct 13 05:10:32.935936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:10:32.937355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:33.072377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:33.076465 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:10:33.111535 kubelet[2143]: E1013 05:10:33.111482 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:10:33.114343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:10:33.114483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:10:33.116206 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak. 
Oct 13 05:10:33.634927 containerd[1574]: time="2025-10-13T05:10:33.634564071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:33.635270 containerd[1574]: time="2025-10-13T05:10:33.635243191Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 13 05:10:33.636237 containerd[1574]: time="2025-10-13T05:10:33.636186191Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:33.638746 containerd[1574]: time="2025-10-13T05:10:33.638687071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:33.640582 containerd[1574]: time="2025-10-13T05:10:33.640555391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.62682288s" Oct 13 05:10:33.640582 containerd[1574]: time="2025-10-13T05:10:33.640585591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 13 05:10:33.641089 containerd[1574]: time="2025-10-13T05:10:33.641000671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:10:34.273210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901293088.mount: Deactivated successfully. 
Oct 13 05:10:34.277871 containerd[1574]: time="2025-10-13T05:10:34.277824711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:34.278769 containerd[1574]: time="2025-10-13T05:10:34.278691791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 13 05:10:34.279706 containerd[1574]: time="2025-10-13T05:10:34.279641431Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:34.281701 containerd[1574]: time="2025-10-13T05:10:34.281645591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:34.282878 containerd[1574]: time="2025-10-13T05:10:34.282216351Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 641.18104ms" Oct 13 05:10:34.282878 containerd[1574]: time="2025-10-13T05:10:34.282255991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 13 05:10:34.283079 containerd[1574]: time="2025-10-13T05:10:34.283046871Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:10:36.858324 containerd[1574]: time="2025-10-13T05:10:36.858270471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:36.859668 containerd[1574]: time="2025-10-13T05:10:36.859383831Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 13 05:10:36.860451 containerd[1574]: time="2025-10-13T05:10:36.860424191Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:36.864007 containerd[1574]: time="2025-10-13T05:10:36.863975311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:36.866172 containerd[1574]: time="2025-10-13T05:10:36.866121991Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.5829004s" Oct 13 05:10:36.867325 containerd[1574]: time="2025-10-13T05:10:36.867163991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 13 05:10:43.261779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:10:43.265696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 13 05:10:43.287763 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:10:43.287837 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:10:43.288064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:43.293404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:43.319586 systemd[1]: Reload requested from client PID 2234 ('systemctl') (unit session-7.scope)... Oct 13 05:10:43.319607 systemd[1]: Reloading... Oct 13 05:10:43.390206 zram_generator::config[2277]: No configuration found. Oct 13 05:10:43.599777 systemd[1]: Reloading finished in 279 ms. Oct 13 05:10:43.641590 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:10:43.641667 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:10:43.641952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:43.641998 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.3M memory peak. Oct 13 05:10:43.644416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:43.758818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:43.767193 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:10:43.782181 kernel: hrtimer: interrupt took 3527840 ns Oct 13 05:10:43.812888 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:10:43.812888 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:10:43.813220 kubelet[2322]: I1013 05:10:43.812932 2322 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:10:44.930425 kubelet[2322]: I1013 05:10:44.930370 2322 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:10:44.930425 kubelet[2322]: I1013 05:10:44.930406 2322 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:10:44.930425 kubelet[2322]: I1013 05:10:44.930437 2322 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:10:44.930783 kubelet[2322]: I1013 05:10:44.930442 2322 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:10:44.930783 kubelet[2322]: I1013 05:10:44.930643 2322 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:10:45.016405 kubelet[2322]: E1013 05:10:45.014983 2322 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:10:45.018238 kubelet[2322]: I1013 05:10:45.018214 2322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:10:45.022408 kubelet[2322]: I1013 05:10:45.022355 2322 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:10:45.024450 kubelet[2322]: I1013 05:10:45.024405 2322 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 13 05:10:45.024609 kubelet[2322]: I1013 05:10:45.024571 2322 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:10:45.024801 kubelet[2322]: I1013 05:10:45.024600 2322 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:10:45.024801 kubelet[2322]: I1013 05:10:45.024734 2322 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:10:45.024801 kubelet[2322]: I1013 05:10:45.024752 2322 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:10:45.024948 kubelet[2322]: I1013 05:10:45.024841 2322 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:10:45.027108 kubelet[2322]: I1013 05:10:45.027066 2322 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:10:45.028229 kubelet[2322]: I1013 05:10:45.028203 2322 kubelet.go:475] "Attempting to sync node with API server" Oct 13 
05:10:45.028229 kubelet[2322]: I1013 05:10:45.028232 2322 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:10:45.029175 kubelet[2322]: I1013 05:10:45.029156 2322 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:10:45.029175 kubelet[2322]: I1013 05:10:45.029176 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:10:45.040923 kubelet[2322]: E1013 05:10:45.040884 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:10:45.040923 kubelet[2322]: E1013 05:10:45.040907 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:10:45.041164 kubelet[2322]: I1013 05:10:45.041146 2322 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:10:45.041966 kubelet[2322]: I1013 05:10:45.041935 2322 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:10:45.042000 kubelet[2322]: I1013 05:10:45.041971 2322 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:10:45.042033 kubelet[2322]: W1013 05:10:45.042006 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 13 05:10:45.043928 kubelet[2322]: I1013 05:10:45.043899 2322 server.go:1262] "Started kubelet" Oct 13 05:10:45.046759 kubelet[2322]: I1013 05:10:45.044473 2322 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:10:45.049901 kubelet[2322]: I1013 05:10:45.044577 2322 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:10:45.049946 kubelet[2322]: I1013 05:10:45.049905 2322 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:10:45.050210 kubelet[2322]: I1013 05:10:45.050186 2322 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:10:45.050841 kubelet[2322]: I1013 05:10:45.050807 2322 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:10:45.050896 kubelet[2322]: I1013 05:10:45.045675 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:10:45.051752 kubelet[2322]: I1013 05:10:45.047057 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:10:45.051892 kubelet[2322]: I1013 05:10:45.051876 2322 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:10:45.052035 kubelet[2322]: I1013 05:10:45.052022 2322 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:10:45.052095 kubelet[2322]: E1013 05:10:45.052051 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:45.052213 kubelet[2322]: I1013 05:10:45.052202 2322 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:10:45.052617 kubelet[2322]: E1013 05:10:45.052593 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:10:45.053505 kubelet[2322]: E1013 05:10:45.053460 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Oct 13 05:10:45.053615 kubelet[2322]: E1013 05:10:45.053594 2322 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:10:45.053713 kubelet[2322]: I1013 05:10:45.053679 2322 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:10:45.053873 kubelet[2322]: I1013 05:10:45.053805 2322 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:10:45.054917 kubelet[2322]: E1013 05:10:45.053667 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df4d9a8816257 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:10:45.043864151 +0000 UTC m=+1.267107081,LastTimestamp:2025-10-13 05:10:45.043864151 +0000 UTC m=+1.267107081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:10:45.056483 kubelet[2322]: I1013 05:10:45.056464 2322 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:10:45.071397 kubelet[2322]: I1013 05:10:45.071355 2322 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:10:45.071397 kubelet[2322]: I1013 05:10:45.071373 2322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:10:45.071397 kubelet[2322]: I1013 05:10:45.071389 2322 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:10:45.073124 kubelet[2322]: I1013 05:10:45.073093 2322 policy_none.go:49] "None policy: Start" Oct 13 05:10:45.073124 kubelet[2322]: I1013 05:10:45.073119 2322 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:10:45.073124 kubelet[2322]: I1013 05:10:45.073167 2322 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:10:45.074207 kubelet[2322]: I1013 05:10:45.074185 2322 policy_none.go:47] "Start" Oct 13 05:10:45.079183 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:10:45.080411 kubelet[2322]: I1013 05:10:45.080364 2322 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:10:45.081349 kubelet[2322]: I1013 05:10:45.081321 2322 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:10:45.081349 kubelet[2322]: I1013 05:10:45.081342 2322 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:10:45.081406 kubelet[2322]: I1013 05:10:45.081362 2322 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:10:45.081406 kubelet[2322]: E1013 05:10:45.081398 2322 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:10:45.083719 kubelet[2322]: E1013 05:10:45.083683 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:10:45.092463 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:10:45.095237 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:10:45.114962 kubelet[2322]: E1013 05:10:45.114933 2322 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:10:45.115282 kubelet[2322]: I1013 05:10:45.115264 2322 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:10:45.115380 kubelet[2322]: I1013 05:10:45.115348 2322 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:10:45.115612 kubelet[2322]: I1013 05:10:45.115596 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:10:45.117243 kubelet[2322]: E1013 05:10:45.117215 2322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:10:45.117363 kubelet[2322]: E1013 05:10:45.117254 2322 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:10:45.192074 systemd[1]: Created slice kubepods-burstable-pod9ae5e56744b1019b9787c08783a30996.slice - libcontainer container kubepods-burstable-pod9ae5e56744b1019b9787c08783a30996.slice. Oct 13 05:10:45.209528 kubelet[2322]: E1013 05:10:45.209479 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:45.212376 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 13 05:10:45.217325 kubelet[2322]: I1013 05:10:45.217292 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:10:45.217697 kubelet[2322]: E1013 05:10:45.217658 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Oct 13 05:10:45.223349 kubelet[2322]: E1013 05:10:45.223213 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:45.225638 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 05:10:45.228065 kubelet[2322]: E1013 05:10:45.228040 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:45.253233 kubelet[2322]: I1013 05:10:45.253201 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:45.253400 kubelet[2322]: I1013 05:10:45.253349 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:45.253400 kubelet[2322]: I1013 05:10:45.253376 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:45.253508 kubelet[2322]: I1013 05:10:45.253494 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:45.253631 kubelet[2322]: I1013 05:10:45.253565 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:45.253631 kubelet[2322]: I1013 05:10:45.253584 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:45.253631 kubelet[2322]: I1013 05:10:45.253600 2322 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:45.253736 kubelet[2322]: I1013 05:10:45.253613 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:45.253796 kubelet[2322]: I1013 05:10:45.253764 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:45.253985 kubelet[2322]: E1013 05:10:45.253953 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Oct 13 05:10:45.419017 kubelet[2322]: I1013 05:10:45.418981 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:10:45.419323 kubelet[2322]: E1013 05:10:45.419298 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Oct 13 05:10:45.512038 kubelet[2322]: E1013 05:10:45.511924 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:45.513137 containerd[1574]: time="2025-10-13T05:10:45.513093191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ae5e56744b1019b9787c08783a30996,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:45.526739 kubelet[2322]: E1013 05:10:45.526709 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:45.527229 containerd[1574]: time="2025-10-13T05:10:45.527193031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:45.535014 kubelet[2322]: E1013 05:10:45.534974 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:45.535549 containerd[1574]: time="2025-10-13T05:10:45.535513031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:45.655258 kubelet[2322]: E1013 05:10:45.655187 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" 
interval="800ms" Oct 13 05:10:45.821184 kubelet[2322]: I1013 05:10:45.821055 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:10:45.821428 kubelet[2322]: E1013 05:10:45.821386 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Oct 13 05:10:45.922348 kubelet[2322]: E1013 05:10:45.922296 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:10:46.027053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983452226.mount: Deactivated successfully. Oct 13 05:10:46.034485 containerd[1574]: time="2025-10-13T05:10:46.034401871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:10:46.036957 containerd[1574]: time="2025-10-13T05:10:46.036915071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 05:10:46.039309 containerd[1574]: time="2025-10-13T05:10:46.039240111Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:10:46.041445 containerd[1574]: time="2025-10-13T05:10:46.041287511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:10:46.043165 containerd[1574]: time="2025-10-13T05:10:46.042366231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:10:46.043757 containerd[1574]: time="2025-10-13T05:10:46.043687191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:10:46.043983 containerd[1574]: time="2025-10-13T05:10:46.043943991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:10:46.044736 containerd[1574]: time="2025-10-13T05:10:46.044694791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:10:46.050072 containerd[1574]: time="2025-10-13T05:10:46.047654791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.1922ms" Oct 13 05:10:46.050072 containerd[1574]: time="2025-10-13T05:10:46.049800871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 520.47668ms" Oct 13 05:10:46.050638 containerd[1574]: time="2025-10-13T05:10:46.050609471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 511.714ms" Oct 13 05:10:46.082951 containerd[1574]: time="2025-10-13T05:10:46.082765271Z" level=info msg="connecting to shim 1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7" address="unix:///run/containerd/s/fa904b17ea43b5b09e33e60f7faa907ceda8731167280e2d421e463c48411855" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:46.089267 containerd[1574]: time="2025-10-13T05:10:46.089224751Z" level=info msg="connecting to shim 9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae" address="unix:///run/containerd/s/e30395199604f34e5543e137871152e64cd1ce005fad7e7bbdde39c249380104" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:46.099424 containerd[1574]: time="2025-10-13T05:10:46.099363991Z" level=info msg="connecting to shim 0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67" address="unix:///run/containerd/s/b429d6185aa074bf9aec7426be671ff073195da605d1b67dd6e6551fc0c55101" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:46.113347 systemd[1]: Started cri-containerd-1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7.scope - libcontainer container 1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7. Oct 13 05:10:46.118094 systemd[1]: Started cri-containerd-9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae.scope - libcontainer container 9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae. Oct 13 05:10:46.128084 systemd[1]: Started cri-containerd-0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67.scope - libcontainer container 0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67. 
Oct 13 05:10:46.161572 containerd[1574]: time="2025-10-13T05:10:46.161518311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9ae5e56744b1019b9787c08783a30996,Namespace:kube-system,Attempt:0,} returns sandbox id \"9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae\"" Oct 13 05:10:46.163376 kubelet[2322]: E1013 05:10:46.163346 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:46.164025 containerd[1574]: time="2025-10-13T05:10:46.163982591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7\"" Oct 13 05:10:46.164599 kubelet[2322]: E1013 05:10:46.164578 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:46.165229 kubelet[2322]: E1013 05:10:46.165201 2322 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:10:46.168733 containerd[1574]: time="2025-10-13T05:10:46.168657071Z" level=info msg="CreateContainer within sandbox \"9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:10:46.170257 containerd[1574]: time="2025-10-13T05:10:46.170222791Z" level=info msg="CreateContainer within sandbox \"1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:10:46.177350 containerd[1574]: time="2025-10-13T05:10:46.177290951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67\"" Oct 13 05:10:46.178209 kubelet[2322]: E1013 05:10:46.178165 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:46.184444 containerd[1574]: time="2025-10-13T05:10:46.184401391Z" level=info msg="CreateContainer within sandbox \"0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:10:46.185405 containerd[1574]: time="2025-10-13T05:10:46.185352191Z" level=info msg="Container f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:10:46.187855 containerd[1574]: time="2025-10-13T05:10:46.187810031Z" level=info msg="Container 4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:10:46.197795 containerd[1574]: time="2025-10-13T05:10:46.197719231Z" level=info msg="CreateContainer within sandbox \"9883993beaa6a6e81a1a8ce769f98ed721bca0179d8151b268f40e972691b1ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b\"" Oct 13 05:10:46.198491 containerd[1574]: time="2025-10-13T05:10:46.198467951Z" level=info msg="StartContainer for \"f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b\"" Oct 13 05:10:46.199596 containerd[1574]: time="2025-10-13T05:10:46.199565311Z" level=info msg="connecting to shim f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b" address="unix:///run/containerd/s/e30395199604f34e5543e137871152e64cd1ce005fad7e7bbdde39c249380104" protocol=ttrpc version=3 Oct 13 05:10:46.202239 containerd[1574]: time="2025-10-13T05:10:46.202192831Z" level=info msg="CreateContainer within sandbox \"1e5aab3202b0bb8255aa6bf8f8803ded361b485cb13abefbd6121b924f3f6ac7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30\"" Oct 13 05:10:46.202651 containerd[1574]: time="2025-10-13T05:10:46.202605231Z" level=info msg="StartContainer for \"4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30\"" Oct 13 05:10:46.203023 containerd[1574]: time="2025-10-13T05:10:46.202997751Z" level=info msg="Container b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:10:46.203757 containerd[1574]: time="2025-10-13T05:10:46.203716111Z" level=info msg="connecting to shim 4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30" address="unix:///run/containerd/s/fa904b17ea43b5b09e33e60f7faa907ceda8731167280e2d421e463c48411855" protocol=ttrpc version=3 Oct 13 05:10:46.212612 containerd[1574]: time="2025-10-13T05:10:46.212570671Z" level=info msg="CreateContainer within sandbox \"0d1bf457857aebf92a9372d4c39f95c05ee65ea2272b48be2b401655b0510c67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4\"" Oct 13 05:10:46.213574 containerd[1574]: time="2025-10-13T05:10:46.213413471Z" level=info msg="StartContainer for \"b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4\"" Oct 13 05:10:46.215599 containerd[1574]: time="2025-10-13T05:10:46.215470671Z" level=info msg="connecting to shim b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4" address="unix:///run/containerd/s/b429d6185aa074bf9aec7426be671ff073195da605d1b67dd6e6551fc0c55101" protocol=ttrpc version=3 Oct 13 05:10:46.221332 systemd[1]: Started cri-containerd-f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b.scope - libcontainer container f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b. Oct 13 05:10:46.233335 systemd[1]: Started cri-containerd-4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30.scope - libcontainer container 4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30. Oct 13 05:10:46.236714 systemd[1]: Started cri-containerd-b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4.scope - libcontainer container b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4. 
Oct 13 05:10:46.268526 containerd[1574]: time="2025-10-13T05:10:46.268428311Z" level=info msg="StartContainer for \"f88303455e1041bb06ea7878e00c3dd203ef5fe43aa091d538b07a64724f3f1b\" returns successfully" Oct 13 05:10:46.288860 containerd[1574]: time="2025-10-13T05:10:46.288243231Z" level=info msg="StartContainer for \"b7a282ec2c4e0aa2f62d82e902a4cc2dc1420d834e8e50a2b8a98c30cbc42ca4\" returns successfully" Oct 13 05:10:46.296858 containerd[1574]: time="2025-10-13T05:10:46.296662951Z" level=info msg="StartContainer for \"4f919e8bdcefb646e032a88a102919a5004fad84cb4a24587865a9032539ff30\" returns successfully" Oct 13 05:10:46.625712 kubelet[2322]: I1013 05:10:46.625469 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:10:47.094149 kubelet[2322]: E1013 05:10:47.091974 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:47.094149 kubelet[2322]: E1013 05:10:47.092096 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:47.095805 kubelet[2322]: E1013 05:10:47.095780 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:47.095912 kubelet[2322]: E1013 05:10:47.095895 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:47.098971 kubelet[2322]: E1013 05:10:47.098946 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:47.099087 kubelet[2322]: E1013 05:10:47.099071 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:48.102560 kubelet[2322]: E1013 05:10:48.102511 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:48.102909 kubelet[2322]: E1013 05:10:48.102650 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:48.103082 kubelet[2322]: E1013 05:10:48.103050 2322 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:10:48.103203 kubelet[2322]: E1013 05:10:48.103182 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:48.480584 kubelet[2322]: E1013 05:10:48.480329 2322 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:10:48.569536 kubelet[2322]: E1013 05:10:48.569438 2322 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186df4d9a8816257 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:10:45.043864151 +0000 UTC m=+1.267107081,LastTimestamp:2025-10-13 05:10:45.043864151 +0000 UTC m=+1.267107081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:10:48.593640 kubelet[2322]: I1013 05:10:48.593601 2322 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:10:48.593640 kubelet[2322]: E1013 05:10:48.593644 2322 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:10:48.605335 kubelet[2322]: E1013 05:10:48.605301 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:48.705815 kubelet[2322]: E1013 05:10:48.705757 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:48.806547 kubelet[2322]: E1013 05:10:48.806241 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:48.906595 kubelet[2322]: E1013 05:10:48.906544 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:49.007087 kubelet[2322]: E1013 05:10:49.007028 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:49.107584 kubelet[2322]: E1013 05:10:49.107470 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:49.208336 kubelet[2322]: E1013 05:10:49.208269 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:49.309028 kubelet[2322]: E1013 05:10:49.308980 2322 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:49.452819 kubelet[2322]: I1013 05:10:49.452723 2322 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:49.459953 kubelet[2322]: I1013 05:10:49.459886 2322 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:49.465457 kubelet[2322]: I1013 05:10:49.465422 2322 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:50.036258 kubelet[2322]: I1013 05:10:50.036218 2322 apiserver.go:52] "Watching apiserver" Oct 13 05:10:50.039872 kubelet[2322]: E1013 05:10:50.038974 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:50.039872 kubelet[2322]: E1013 05:10:50.039654 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:50.040031 kubelet[2322]: E1013 05:10:50.040003 2322 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Oct 13 05:10:50.053177 kubelet[2322]: I1013 05:10:50.053150 2322 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:10:50.720353 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... Oct 13 05:10:50.720371 systemd[1]: Reloading... Oct 13 05:10:50.773161 zram_generator::config[2662]: No configuration found. Oct 13 05:10:50.970372 systemd[1]: Reloading finished in 249 ms. Oct 13 05:10:50.997393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:51.009964 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:10:51.010237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:51.010294 systemd[1]: kubelet.service: Consumed 1.567s CPU time, 123.8M memory peak. Oct 13 05:10:51.012021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:10:51.145896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:10:51.150918 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:10:51.192714 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:10:51.192714 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:10:51.193509 kubelet[2704]: I1013 05:10:51.192771 2704 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:10:51.204592 kubelet[2704]: I1013 05:10:51.204554 2704 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:10:51.204592 kubelet[2704]: I1013 05:10:51.204583 2704 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:10:51.204711 kubelet[2704]: I1013 05:10:51.204612 2704 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:10:51.204711 kubelet[2704]: I1013 05:10:51.204618 2704 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:10:51.204960 kubelet[2704]: I1013 05:10:51.204943 2704 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:10:51.206526 kubelet[2704]: I1013 05:10:51.206506 2704 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:10:51.210257 kubelet[2704]: I1013 05:10:51.210224 2704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:10:51.219152 kubelet[2704]: I1013 05:10:51.218935 2704 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:10:51.224360 kubelet[2704]: I1013 05:10:51.224243 2704 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:10:51.224624 kubelet[2704]: I1013 05:10:51.224595 2704 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:10:51.224899 kubelet[2704]: I1013 05:10:51.224743 2704 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:10:51.225349 kubelet[2704]: I1013 05:10:51.225333 2704 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:10:51.226147 kubelet[2704]: I1013 05:10:51.225404 2704 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:10:51.226266 kubelet[2704]: I1013 05:10:51.226241 2704 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:10:51.227191 kubelet[2704]: I1013 05:10:51.227173 2704 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:10:51.227542 kubelet[2704]: I1013 05:10:51.227522 2704 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:10:51.227542 kubelet[2704]: I1013 05:10:51.227543 2704 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:10:51.227616 kubelet[2704]: I1013 05:10:51.227574 2704 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:10:51.227616 kubelet[2704]: I1013 05:10:51.227585 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:10:51.230156 kubelet[2704]: I1013 05:10:51.229072 2704 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:10:51.230156 kubelet[2704]: I1013 05:10:51.229794 2704 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:10:51.230156 kubelet[2704]: I1013 05:10:51.229827 2704 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:10:51.231765 
kubelet[2704]: I1013 05:10:51.231736 2704 server.go:1262] "Started kubelet" Oct 13 05:10:51.232316 kubelet[2704]: I1013 05:10:51.232294 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:10:51.234342 kubelet[2704]: I1013 05:10:51.234303 2704 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:10:51.235242 kubelet[2704]: I1013 05:10:51.235221 2704 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:10:51.237961 kubelet[2704]: I1013 05:10:51.237772 2704 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:10:51.237961 kubelet[2704]: I1013 05:10:51.237876 2704 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:10:51.239192 kubelet[2704]: I1013 05:10:51.235280 2704 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:10:51.243307 kubelet[2704]: E1013 05:10:51.242863 2704 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:10:51.243307 kubelet[2704]: I1013 05:10:51.243174 2704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:10:51.243411 kubelet[2704]: I1013 05:10:51.243357 2704 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:10:51.243436 kubelet[2704]: I1013 05:10:51.243417 2704 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:10:51.243593 kubelet[2704]: I1013 05:10:51.243557 2704 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:10:51.253541 kubelet[2704]: I1013 05:10:51.235270 2704 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:10:51.259710 kubelet[2704]: I1013 05:10:51.258943 2704 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:10:51.259884 kubelet[2704]: I1013 05:10:51.259855 2704 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:10:51.261883 kubelet[2704]: E1013 05:10:51.261861 2704 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:10:51.269356 kubelet[2704]: I1013 05:10:51.269321 2704 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:10:51.270263 kubelet[2704]: I1013 05:10:51.270234 2704 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:10:51.270263 kubelet[2704]: I1013 05:10:51.270255 2704 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:10:51.270358 kubelet[2704]: I1013 05:10:51.270277 2704 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:10:51.270358 kubelet[2704]: E1013 05:10:51.270321 2704 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:10:51.299579 kubelet[2704]: I1013 05:10:51.299526 2704 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:10:51.299579 kubelet[2704]: I1013 05:10:51.299553 2704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:10:51.299579 kubelet[2704]: I1013 05:10:51.299574 2704 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:10:51.309243 kubelet[2704]: I1013 05:10:51.309210 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:10:51.309377 kubelet[2704]: I1013 05:10:51.309236 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:10:51.309434 kubelet[2704]: I1013 05:10:51.309425 2704 policy_none.go:49] "None policy: Start" Oct 13 05:10:51.309491 kubelet[2704]: I1013 05:10:51.309483 2704 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:10:51.309548 kubelet[2704]: I1013 05:10:51.309537 2704 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:10:51.309800 kubelet[2704]: I1013 05:10:51.309743 2704 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:10:51.309800 kubelet[2704]: I1013 05:10:51.309758 2704 policy_none.go:47] "Start" Oct 13 05:10:51.316223 kubelet[2704]: E1013 05:10:51.316201 2704 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:10:51.316569 kubelet[2704]: I1013 05:10:51.316465 2704 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:10:51.316569 kubelet[2704]: I1013 05:10:51.316480 2704 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:10:51.317058 kubelet[2704]: I1013 05:10:51.317023 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:10:51.320445 kubelet[2704]: E1013 05:10:51.320263 2704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:10:51.372163 kubelet[2704]: I1013 05:10:51.371796 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:51.372163 kubelet[2704]: I1013 05:10:51.371943 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:51.372308 kubelet[2704]: I1013 05:10:51.372224 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.379009 kubelet[2704]: E1013 05:10:51.378970 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:51.379121 kubelet[2704]: E1013 05:10:51.379084 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.379525 kubelet[2704]: E1013 05:10:51.379491 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:51.420911 kubelet[2704]: I1013 05:10:51.420873 2704 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:10:51.426521 kubelet[2704]: I1013 05:10:51.426490 2704 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:10:51.426686 kubelet[2704]: I1013 05:10:51.426670 2704 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:10:51.461251 kubelet[2704]: I1013 05:10:51.461151 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.461251 kubelet[2704]: I1013 05:10:51.461195 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:51.461251 kubelet[2704]: I1013 05:10:51.461214 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:51.461251 kubelet[2704]: I1013 05:10:51.461228 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.461440 kubelet[2704]: I1013 05:10:51.461291 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.461440 kubelet[2704]: I1013 05:10:51.461344 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:51.461440 kubelet[2704]: I1013 05:10:51.461377 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ae5e56744b1019b9787c08783a30996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9ae5e56744b1019b9787c08783a30996\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:51.461440 kubelet[2704]: I1013 05:10:51.461402 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.461440 kubelet[2704]: I1013 05:10:51.461415 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:10:51.680430 kubelet[2704]: E1013 05:10:51.680330 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:51.680430 kubelet[2704]: E1013 05:10:51.680346 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:51.680575 kubelet[2704]: E1013 05:10:51.680455 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:51.722831 sudo[2743]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 13 05:10:51.723101 sudo[2743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 13 05:10:52.040480 sudo[2743]: pam_unix(sudo:session): session closed for user root Oct 13 05:10:52.232300 kubelet[2704]: I1013 05:10:52.232264 2704 apiserver.go:52] "Watching apiserver" Oct 13 05:10:52.286558 kubelet[2704]: I1013 05:10:52.286312 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:52.286730 kubelet[2704]: E1013 05:10:52.286689 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:52.286861 kubelet[2704]: I1013 05:10:52.286844 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:52.295398 kubelet[2704]: E1013 05:10:52.295284 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:10:52.295398 kubelet[2704]: E1013 05:10:52.295559 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:52.298099 kubelet[2704]: E1013 05:10:52.296979 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:10:52.298099 kubelet[2704]: E1013 05:10:52.298016 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:52.310584 kubelet[2704]: I1013 05:10:52.310528 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.310497271 podStartE2EDuration="3.310497271s" podCreationTimestamp="2025-10-13 05:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:10:52.309400671 +0000 UTC m=+1.155057281" watchObservedRunningTime="2025-10-13 05:10:52.310497271 +0000 UTC m=+1.156153921" Oct 13 05:10:52.338353 kubelet[2704]: I1013 05:10:52.338298 2704 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:10:52.346524 kubelet[2704]: I1013 05:10:52.344146 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.344120391 podStartE2EDuration="3.344120391s" podCreationTimestamp="2025-10-13 05:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:10:52.343833111 +0000 UTC m=+1.189489721" watchObservedRunningTime="2025-10-13 05:10:52.344120391 +0000 UTC m=+1.189776961" Oct 13 05:10:52.370550 kubelet[2704]: I1013 05:10:52.370250 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.370231911 podStartE2EDuration="3.370231911s" podCreationTimestamp="2025-10-13 05:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:10:52.356004271 +0000 UTC m=+1.201660881" watchObservedRunningTime="2025-10-13 05:10:52.370231911 +0000 UTC m=+1.215888481" Oct 13 05:10:53.288260 kubelet[2704]: E1013 05:10:53.288228 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:53.288587 kubelet[2704]: E1013 05:10:53.288276 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:53.557947 kubelet[2704]: E1013 05:10:53.557848 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:53.962180 sudo[1771]: pam_unix(sudo:session): session closed for user root Oct 13 05:10:53.964152 sshd[1770]: Connection closed by 10.0.0.1 port 48910 Oct 13 05:10:53.964485 sshd-session[1767]: pam_unix(sshd:session): session closed for user core 
Oct 13 05:10:53.968359 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:48910.service: Deactivated successfully. Oct 13 05:10:53.970509 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:10:53.970704 systemd[1]: session-7.scope: Consumed 8.800s CPU time, 253.5M memory peak. Oct 13 05:10:53.971914 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:10:53.973102 systemd-logind[1542]: Removed session 7. Oct 13 05:10:54.290500 kubelet[2704]: E1013 05:10:54.290354 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:56.377289 kubelet[2704]: I1013 05:10:56.377012 2704 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:10:56.377910 containerd[1574]: time="2025-10-13T05:10:56.377816808Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:10:56.378219 kubelet[2704]: I1013 05:10:56.378024 2704 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:10:56.994330 systemd[1]: Created slice kubepods-besteffort-pod2155505b_3113_459f_bbe8_a7a98b81da02.slice - libcontainer container kubepods-besteffort-pod2155505b_3113_459f_bbe8_a7a98b81da02.slice. Oct 13 05:10:57.010533 systemd[1]: Created slice kubepods-burstable-pod596e6789_6ea7_414e_82c9_7454b7e2d1ab.slice - libcontainer container kubepods-burstable-pod596e6789_6ea7_414e_82c9_7454b7e2d1ab.slice. Oct 13 05:10:57.098480 kubelet[2704]: I1013 05:10:57.098434 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hostproc\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098480 kubelet[2704]: I1013 05:10:57.098476 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-lib-modules\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098640 kubelet[2704]: I1013 05:10:57.098513 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-config-path\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098640 kubelet[2704]: I1013 05:10:57.098592 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-kernel\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098640 kubelet[2704]: I1013 05:10:57.098630 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2155505b-3113-459f-bbe8-a7a98b81da02-lib-modules\") pod \"kube-proxy-2nmgx\" (UID: \"2155505b-3113-459f-bbe8-a7a98b81da02\") " pod="kube-system/kube-proxy-2nmgx" Oct 13 05:10:57.098735 kubelet[2704]: I1013 05:10:57.098646 2704 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc5g\" (UniqueName: \"kubernetes.io/projected/2155505b-3113-459f-bbe8-a7a98b81da02-kube-api-access-7rc5g\") pod \"kube-proxy-2nmgx\" (UID: \"2155505b-3113-459f-bbe8-a7a98b81da02\") " pod="kube-system/kube-proxy-2nmgx" Oct 13 05:10:57.098735 kubelet[2704]: I1013 05:10:57.098662 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cni-path\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098735 kubelet[2704]: I1013 05:10:57.098677 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hubble-tls\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.098735 kubelet[2704]: I1013 05:10:57.098693 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2155505b-3113-459f-bbe8-a7a98b81da02-xtables-lock\") pod \"kube-proxy-2nmgx\" (UID: \"2155505b-3113-459f-bbe8-a7a98b81da02\") " pod="kube-system/kube-proxy-2nmgx" Oct 13 05:10:57.098735 kubelet[2704]: I1013 05:10:57.098708 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-run\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.098900 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-cgroup\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.098942 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-etc-cni-netd\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.098983 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-net\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.099009 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhgt\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.099033 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/2155505b-3113-459f-bbe8-a7a98b81da02-kube-proxy\") pod \"kube-proxy-2nmgx\" (UID: \"2155505b-3113-459f-bbe8-a7a98b81da02\") " pod="kube-system/kube-proxy-2nmgx" Oct 13 05:10:57.099087 kubelet[2704]: I1013 05:10:57.099057 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-bpf-maps\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099227 kubelet[2704]: I1013 05:10:57.099089 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-xtables-lock\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.099227 kubelet[2704]: I1013 05:10:57.099116 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/596e6789-6ea7-414e-82c9-7454b7e2d1ab-clustermesh-secrets\") pod \"cilium-4pf6j\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " pod="kube-system/cilium-4pf6j" Oct 13 05:10:57.213156 kubelet[2704]: E1013 05:10:57.213108 2704 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:10:57.213156 kubelet[2704]: E1013 05:10:57.213152 2704 projected.go:196] Error preparing data for projected volume kube-api-access-7zhgt for pod kube-system/cilium-4pf6j: configmap "kube-root-ca.crt" not found Oct 13 05:10:57.213289 kubelet[2704]: E1013 05:10:57.213215 2704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt podName:596e6789-6ea7-414e-82c9-7454b7e2d1ab nodeName:}" failed. No retries permitted until 2025-10-13 05:10:57.713193593 +0000 UTC m=+6.558850203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7zhgt" (UniqueName: "kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt") pod "cilium-4pf6j" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab") : configmap "kube-root-ca.crt" not found Oct 13 05:10:57.213711 kubelet[2704]: E1013 05:10:57.213585 2704 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:10:57.213711 kubelet[2704]: E1013 05:10:57.213608 2704 projected.go:196] Error preparing data for projected volume kube-api-access-7rc5g for pod kube-system/kube-proxy-2nmgx: configmap "kube-root-ca.crt" not found Oct 13 05:10:57.213711 kubelet[2704]: E1013 05:10:57.213649 2704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2155505b-3113-459f-bbe8-a7a98b81da02-kube-api-access-7rc5g podName:2155505b-3113-459f-bbe8-a7a98b81da02 nodeName:}" failed. No retries permitted until 2025-10-13 05:10:57.713634476 +0000 UTC m=+6.559291086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7rc5g" (UniqueName: "kubernetes.io/projected/2155505b-3113-459f-bbe8-a7a98b81da02-kube-api-access-7rc5g") pod "kube-proxy-2nmgx" (UID: "2155505b-3113-459f-bbe8-a7a98b81da02") : configmap "kube-root-ca.crt" not found Oct 13 05:10:57.504463 systemd[1]: Created slice kubepods-besteffort-podc6015f34_65bf_48d7_9be0_e207a43cff58.slice - libcontainer container kubepods-besteffort-podc6015f34_65bf_48d7_9be0_e207a43cff58.slice. Oct 13 05:10:57.602931 kubelet[2704]: I1013 05:10:57.602811 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6015f34-65bf-48d7-9be0-e207a43cff58-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-x547l\" (UID: \"c6015f34-65bf-48d7-9be0-e207a43cff58\") " pod="kube-system/cilium-operator-6f9c7c5859-x547l" Oct 13 05:10:57.602931 kubelet[2704]: I1013 05:10:57.602882 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4r92\" (UniqueName: \"kubernetes.io/projected/c6015f34-65bf-48d7-9be0-e207a43cff58-kube-api-access-z4r92\") pod \"cilium-operator-6f9c7c5859-x547l\" (UID: \"c6015f34-65bf-48d7-9be0-e207a43cff58\") " pod="kube-system/cilium-operator-6f9c7c5859-x547l" Oct 13 05:10:57.737682 kubelet[2704]: E1013 05:10:57.737631 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.813738 kubelet[2704]: E1013 05:10:57.813624 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.815074 containerd[1574]: time="2025-10-13T05:10:57.815026252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-x547l,Uid:c6015f34-65bf-48d7-9be0-e207a43cff58,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:57.841245 containerd[1574]: time="2025-10-13T05:10:57.841153957Z" level=info msg="connecting to shim 50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945" address="unix:///run/containerd/s/729c7c67a14132a54262f3aa8595143b166680407abb5c55ef9cad415e4e1e16" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:57.864323 systemd[1]: Started cri-containerd-50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945.scope - libcontainer container 50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945. 
Oct 13 05:10:57.897818 containerd[1574]: time="2025-10-13T05:10:57.897778751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-x547l,Uid:c6015f34-65bf-48d7-9be0-e207a43cff58,Namespace:kube-system,Attempt:0,} returns sandbox id \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\"" Oct 13 05:10:57.899004 kubelet[2704]: E1013 05:10:57.898531 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.901297 containerd[1574]: time="2025-10-13T05:10:57.901265891Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 13 05:10:57.907761 kubelet[2704]: E1013 05:10:57.907713 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.908553 containerd[1574]: time="2025-10-13T05:10:57.908497091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nmgx,Uid:2155505b-3113-459f-bbe8-a7a98b81da02,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:57.914562 kubelet[2704]: E1013 05:10:57.914530 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.927632 containerd[1574]: time="2025-10-13T05:10:57.927576197Z" level=info msg="connecting to shim 98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b" address="unix:///run/containerd/s/d19b90de32eef9d90019677647cfda2077e73a111893a8bdc20c4999dd6e5964" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:57.930373 containerd[1574]: time="2025-10-13T05:10:57.930338412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pf6j,Uid:596e6789-6ea7-414e-82c9-7454b7e2d1ab,Namespace:kube-system,Attempt:0,}" Oct 13 05:10:57.950049 containerd[1574]: time="2025-10-13T05:10:57.950003521Z" level=info msg="connecting to shim 448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:10:57.954356 systemd[1]: Started cri-containerd-98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b.scope - libcontainer container 98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b. Oct 13 05:10:57.973386 systemd[1]: Started cri-containerd-448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c.scope - libcontainer container 448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c. 
Oct 13 05:10:57.984405 containerd[1574]: time="2025-10-13T05:10:57.984369712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nmgx,Uid:2155505b-3113-459f-bbe8-a7a98b81da02,Namespace:kube-system,Attempt:0,} returns sandbox id \"98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b\"" Oct 13 05:10:57.985250 kubelet[2704]: E1013 05:10:57.985221 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:57.995628 containerd[1574]: time="2025-10-13T05:10:57.995590734Z" level=info msg="CreateContainer within sandbox \"98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:10:58.005768 containerd[1574]: time="2025-10-13T05:10:58.005701069Z" level=info msg="Container 55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:10:58.009197 containerd[1574]: time="2025-10-13T05:10:58.009165487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pf6j,Uid:596e6789-6ea7-414e-82c9-7454b7e2d1ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\"" Oct 13 05:10:58.009918 kubelet[2704]: E1013 05:10:58.009882 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:58.027004 containerd[1574]: time="2025-10-13T05:10:58.026935619Z" level=info msg="CreateContainer within sandbox \"98512bc56f7d4487393453f13eeade85249c7fd387b49ca8801aaa2513ba9d3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a\"" Oct 13 05:10:58.027466 containerd[1574]: time="2025-10-13T05:10:58.027440342Z" level=info msg="StartContainer for \"55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a\"" Oct 13 05:10:58.028761 containerd[1574]: time="2025-10-13T05:10:58.028733948Z" level=info msg="connecting to shim 55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a" address="unix:///run/containerd/s/d19b90de32eef9d90019677647cfda2077e73a111893a8bdc20c4999dd6e5964" protocol=ttrpc version=3 Oct 13 05:10:58.055288 systemd[1]: Started cri-containerd-55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a.scope - libcontainer container 55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a. 
Oct 13 05:10:58.089253 containerd[1574]: time="2025-10-13T05:10:58.089044942Z" level=info msg="StartContainer for \"55fb5f1f2f902adb44767d5130b7381d273c12c39dad9fb0ede1ecba5c2d161a\" returns successfully" Oct 13 05:10:58.319543 kubelet[2704]: E1013 05:10:58.319480 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:58.321286 kubelet[2704]: E1013 05:10:58.320470 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:10:58.342618 kubelet[2704]: I1013 05:10:58.342487 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2nmgx" podStartSLOduration=2.34247078 podStartE2EDuration="2.34247078s" podCreationTimestamp="2025-10-13 05:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:10:58.331582924 +0000 UTC m=+7.177239574" watchObservedRunningTime="2025-10-13 05:10:58.34247078 +0000 UTC m=+7.188127390" Oct 13 05:10:59.251773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834513671.mount: Deactivated successfully. Oct 13 05:10:59.680091 containerd[1574]: time="2025-10-13T05:10:59.680024997Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:59.681075 containerd[1574]: time="2025-10-13T05:10:59.680947121Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 13 05:10:59.682330 containerd[1574]: time="2025-10-13T05:10:59.682295248Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:10:59.683483 containerd[1574]: time="2025-10-13T05:10:59.683444973Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.782142242s" Oct 13 05:10:59.683561 containerd[1574]: time="2025-10-13T05:10:59.683482734Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 13 05:10:59.684488 containerd[1574]: time="2025-10-13T05:10:59.684432778Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 13 05:10:59.687810 containerd[1574]: time="2025-10-13T05:10:59.687772714Z" level=info msg="CreateContainer within sandbox \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 13 05:10:59.695894 containerd[1574]: time="2025-10-13T05:10:59.694845949Z" level=info msg="Container 
da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:10:59.701765 containerd[1574]: time="2025-10-13T05:10:59.701719022Z" level=info msg="CreateContainer within sandbox \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\"" Oct 13 05:10:59.702669 containerd[1574]: time="2025-10-13T05:10:59.702639147Z" level=info msg="StartContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\"" Oct 13 05:10:59.704049 containerd[1574]: time="2025-10-13T05:10:59.704020074Z" level=info msg="connecting to shim da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc" address="unix:///run/containerd/s/729c7c67a14132a54262f3aa8595143b166680407abb5c55ef9cad415e4e1e16" protocol=ttrpc version=3 Oct 13 05:10:59.744289 systemd[1]: Started cri-containerd-da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc.scope - libcontainer container da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc. Oct 13 05:10:59.771993 containerd[1574]: time="2025-10-13T05:10:59.771955645Z" level=info msg="StartContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" returns successfully" Oct 13 05:11:00.330897 kubelet[2704]: E1013 05:11:00.330855 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:01.333165 kubelet[2704]: E1013 05:11:01.332776 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:03.205301 kubelet[2704]: E1013 05:11:03.205119 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:03.219095 kubelet[2704]: I1013 05:11:03.219035 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-x547l" podStartSLOduration=4.435063326 podStartE2EDuration="6.219020658s" podCreationTimestamp="2025-10-13 05:10:57 +0000 UTC" firstStartedPulling="2025-10-13 05:10:57.900273165 +0000 UTC m=+6.745929775" lastFinishedPulling="2025-10-13 05:10:59.684230497 +0000 UTC m=+8.529887107" observedRunningTime="2025-10-13 05:11:00.349153794 +0000 UTC m=+9.194810404" watchObservedRunningTime="2025-10-13 05:11:03.219020658 +0000 UTC m=+12.064677268" Oct 13 05:11:03.565461 kubelet[2704]: E1013 05:11:03.565365 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:05.427168 update_engine[1546]: I20251013 05:11:05.426836 1546 update_attempter.cc:509] Updating boot flags... Oct 13 05:11:08.037954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715781087.mount: Deactivated successfully. 
Oct 13 05:11:10.193605 containerd[1574]: time="2025-10-13T05:11:10.193553727Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:11:10.194291 containerd[1574]: time="2025-10-13T05:11:10.194251128Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 13 05:11:10.194900 containerd[1574]: time="2025-10-13T05:11:10.194854650Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:11:10.196907 containerd[1574]: time="2025-10-13T05:11:10.196873815Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.512405037s" Oct 13 05:11:10.197022 containerd[1574]: time="2025-10-13T05:11:10.197006975Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 13 05:11:10.200792 containerd[1574]: time="2025-10-13T05:11:10.200207383Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 05:11:10.206920 containerd[1574]: time="2025-10-13T05:11:10.206649758Z" level=info msg="Container c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:10.214198 containerd[1574]: time="2025-10-13T05:11:10.214160856Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\"" Oct 13 05:11:10.214887 containerd[1574]: time="2025-10-13T05:11:10.214846738Z" level=info msg="StartContainer for \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\"" Oct 13 05:11:10.215748 containerd[1574]: time="2025-10-13T05:11:10.215713300Z" level=info msg="connecting to shim c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" protocol=ttrpc version=3 Oct 13 05:11:10.246309 systemd[1]: Started cri-containerd-c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd.scope - libcontainer container c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd. Oct 13 05:11:10.280686 containerd[1574]: time="2025-10-13T05:11:10.280646696Z" level=info msg="StartContainer for \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" returns successfully" Oct 13 05:11:10.293435 systemd[1]: cri-containerd-c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd.scope: Deactivated successfully. 
Oct 13 05:11:10.324893 containerd[1574]: time="2025-10-13T05:11:10.324820001Z" level=info msg="received exit event container_id:\"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" id:\"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" pid:3201 exited_at:{seconds:1760332270 nanos:320121550}" Oct 13 05:11:10.324998 containerd[1574]: time="2025-10-13T05:11:10.324933122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" id:\"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" pid:3201 exited_at:{seconds:1760332270 nanos:320121550}" Oct 13 05:11:10.348006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd-rootfs.mount: Deactivated successfully. Oct 13 05:11:10.359258 kubelet[2704]: E1013 05:11:10.359211 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:11.362946 kubelet[2704]: E1013 05:11:11.362909 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:11.370165 containerd[1574]: time="2025-10-13T05:11:11.368939210Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 05:11:11.381540 containerd[1574]: time="2025-10-13T05:11:11.381500838Z" level=info msg="Container 926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:11.387189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034525637.mount: Deactivated successfully. Oct 13 05:11:11.394480 containerd[1574]: time="2025-10-13T05:11:11.394424107Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\"" Oct 13 05:11:11.395350 containerd[1574]: time="2025-10-13T05:11:11.395224189Z" level=info msg="StartContainer for \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\"" Oct 13 05:11:11.396488 containerd[1574]: time="2025-10-13T05:11:11.396416831Z" level=info msg="connecting to shim 926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" protocol=ttrpc version=3 Oct 13 05:11:11.416310 systemd[1]: Started cri-containerd-926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7.scope - libcontainer container 926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7. Oct 13 05:11:11.441261 containerd[1574]: time="2025-10-13T05:11:11.441152692Z" level=info msg="StartContainer for \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" returns successfully" Oct 13 05:11:11.451570 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:11:11.451839 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:11:11.451910 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Oct 13 05:11:11.453881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:11:11.455449 systemd[1]: cri-containerd-926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7.scope: Deactivated successfully. Oct 13 05:11:11.458990 containerd[1574]: time="2025-10-13T05:11:11.458027410Z" level=info msg="received exit event container_id:\"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" id:\"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" pid:3245 exited_at:{seconds:1760332271 nanos:457814409}" Oct 13 05:11:11.458990 containerd[1574]: time="2025-10-13T05:11:11.458165290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" id:\"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" pid:3245 exited_at:{seconds:1760332271 nanos:457814409}" Oct 13 05:11:11.477738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7-rootfs.mount: Deactivated successfully. Oct 13 05:11:11.492619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:11:12.367253 kubelet[2704]: E1013 05:11:12.366985 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:12.373378 containerd[1574]: time="2025-10-13T05:11:12.373328375Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 05:11:12.380610 containerd[1574]: time="2025-10-13T05:11:12.380579950Z" level=info msg="Container fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:12.389016 containerd[1574]: time="2025-10-13T05:11:12.388975888Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\"" Oct 13 05:11:12.391153 containerd[1574]: time="2025-10-13T05:11:12.389998250Z" level=info msg="StartContainer for \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\"" Oct 13 05:11:12.394089 containerd[1574]: time="2025-10-13T05:11:12.394043019Z" level=info msg="connecting to shim fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" protocol=ttrpc version=3 Oct 13 05:11:12.415324 systemd[1]: Started cri-containerd-fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832.scope - libcontainer container fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832. Oct 13 05:11:12.480704 containerd[1574]: time="2025-10-13T05:11:12.480271920Z" level=info msg="StartContainer for \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" returns successfully" Oct 13 05:11:12.482663 systemd[1]: cri-containerd-fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832.scope: Deactivated successfully. 
Oct 13 05:11:12.484007 containerd[1574]: time="2025-10-13T05:11:12.483975328Z" level=info msg="received exit event container_id:\"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" id:\"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" pid:3293 exited_at:{seconds:1760332272 nanos:483714208}" Oct 13 05:11:12.484455 containerd[1574]: time="2025-10-13T05:11:12.484421969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" id:\"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" pid:3293 exited_at:{seconds:1760332272 nanos:483714208}" Oct 13 05:11:12.510560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832-rootfs.mount: Deactivated successfully. Oct 13 05:11:13.372361 kubelet[2704]: E1013 05:11:13.372260 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:13.378151 containerd[1574]: time="2025-10-13T05:11:13.377329961Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 05:11:13.385208 containerd[1574]: time="2025-10-13T05:11:13.385176537Z" level=info msg="Container ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:13.388458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886165539.mount: Deactivated successfully. Oct 13 05:11:13.395158 containerd[1574]: time="2025-10-13T05:11:13.395043036Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\"" Oct 13 05:11:13.396177 containerd[1574]: time="2025-10-13T05:11:13.395500637Z" level=info msg="StartContainer for \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\"" Oct 13 05:11:13.396989 containerd[1574]: time="2025-10-13T05:11:13.396947320Z" level=info msg="connecting to shim ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" protocol=ttrpc version=3 Oct 13 05:11:13.415281 systemd[1]: Started cri-containerd-ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003.scope - libcontainer container ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003. Oct 13 05:11:13.437106 systemd[1]: cri-containerd-ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003.scope: Deactivated successfully. 
Oct 13 05:11:13.438669 containerd[1574]: time="2025-10-13T05:11:13.438629282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" id:\"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" pid:3332 exited_at:{seconds:1760332273 nanos:437405240}" Oct 13 05:11:13.440287 containerd[1574]: time="2025-10-13T05:11:13.440237205Z" level=info msg="StartContainer for \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" returns successfully" Oct 13 05:11:13.446045 containerd[1574]: time="2025-10-13T05:11:13.445988697Z" level=info msg="received exit event container_id:\"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" id:\"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" pid:3332 exited_at:{seconds:1760332273 nanos:437405240}" Oct 13 05:11:13.465282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003-rootfs.mount: Deactivated successfully. Oct 13 05:11:14.377562 kubelet[2704]: E1013 05:11:14.377352 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:14.383209 containerd[1574]: time="2025-10-13T05:11:14.383170501Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 05:11:14.411192 containerd[1574]: time="2025-10-13T05:11:14.410669512Z" level=info msg="Container 2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:14.414099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532442794.mount: Deactivated successfully. Oct 13 05:11:14.416832 containerd[1574]: time="2025-10-13T05:11:14.416798043Z" level=info msg="CreateContainer within sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\"" Oct 13 05:11:14.417422 containerd[1574]: time="2025-10-13T05:11:14.417387484Z" level=info msg="StartContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\"" Oct 13 05:11:14.418323 containerd[1574]: time="2025-10-13T05:11:14.418284166Z" level=info msg="connecting to shim 2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583" address="unix:///run/containerd/s/95b6dcc8f752e2d68540c7c0dfc7793801444468d5b8a176202e95df516e796a" protocol=ttrpc version=3 Oct 13 05:11:14.442310 systemd[1]: Started cri-containerd-2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583.scope - libcontainer container 2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583. 
Oct 13 05:11:14.474483 containerd[1574]: time="2025-10-13T05:11:14.474445790Z" level=info msg="StartContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" returns successfully" Oct 13 05:11:14.555635 containerd[1574]: time="2025-10-13T05:11:14.555577900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" id:\"7b9377686aabef08070ec057860a97a7d67a8239a406383a23ded30446039af5\" pid:3402 exited_at:{seconds:1760332274 nanos:555290700}" Oct 13 05:11:14.652024 kubelet[2704]: I1013 05:11:14.651851 2704 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:11:14.704114 systemd[1]: Created slice kubepods-burstable-podb24b5c0a_3992_4245_bfc3_deb9622ed259.slice - libcontainer container kubepods-burstable-podb24b5c0a_3992_4245_bfc3_deb9622ed259.slice. Oct 13 05:11:14.717187 systemd[1]: Created slice kubepods-burstable-pod406bd773_ce5c_4688_b488_f5ae6ea47f1b.slice - libcontainer container kubepods-burstable-pod406bd773_ce5c_4688_b488_f5ae6ea47f1b.slice. Oct 13 05:11:14.831714 kubelet[2704]: I1013 05:11:14.831617 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpjn\" (UniqueName: \"kubernetes.io/projected/406bd773-ce5c-4688-b488-f5ae6ea47f1b-kube-api-access-8hpjn\") pod \"coredns-66bc5c9577-p44jg\" (UID: \"406bd773-ce5c-4688-b488-f5ae6ea47f1b\") " pod="kube-system/coredns-66bc5c9577-p44jg" Oct 13 05:11:14.831714 kubelet[2704]: I1013 05:11:14.831660 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf58c\" (UniqueName: \"kubernetes.io/projected/b24b5c0a-3992-4245-bfc3-deb9622ed259-kube-api-access-gf58c\") pod \"coredns-66bc5c9577-ckkgv\" (UID: \"b24b5c0a-3992-4245-bfc3-deb9622ed259\") " pod="kube-system/coredns-66bc5c9577-ckkgv" Oct 13 05:11:14.831714 kubelet[2704]: I1013 05:11:14.831678 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b24b5c0a-3992-4245-bfc3-deb9622ed259-config-volume\") pod \"coredns-66bc5c9577-ckkgv\" (UID: \"b24b5c0a-3992-4245-bfc3-deb9622ed259\") " pod="kube-system/coredns-66bc5c9577-ckkgv" Oct 13 05:11:14.831714 kubelet[2704]: I1013 05:11:14.831695 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/406bd773-ce5c-4688-b488-f5ae6ea47f1b-config-volume\") pod \"coredns-66bc5c9577-p44jg\" (UID: \"406bd773-ce5c-4688-b488-f5ae6ea47f1b\") " pod="kube-system/coredns-66bc5c9577-p44jg" Oct 13 05:11:15.008481 kubelet[2704]: E1013 05:11:15.008432 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:15.009401 containerd[1574]: time="2025-10-13T05:11:15.009151379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckkgv,Uid:b24b5c0a-3992-4245-bfc3-deb9622ed259,Namespace:kube-system,Attempt:0,}" Oct 13 05:11:15.023656 kubelet[2704]: E1013 05:11:15.021667 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:15.023793 containerd[1574]: time="2025-10-13T05:11:15.022267522Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-p44jg,Uid:406bd773-ce5c-4688-b488-f5ae6ea47f1b,Namespace:kube-system,Attempt:0,}" Oct 13 05:11:15.383521 kubelet[2704]: E1013 05:11:15.383411 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:15.403231 kubelet[2704]: I1013 05:11:15.403061 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4pf6j" podStartSLOduration=7.215736179 podStartE2EDuration="19.403046343s" podCreationTimestamp="2025-10-13 05:10:56 +0000 UTC" firstStartedPulling="2025-10-13 05:10:58.010387893 +0000 UTC m=+6.856044503" lastFinishedPulling="2025-10-13 05:11:10.197698057 +0000 UTC m=+19.043354667" observedRunningTime="2025-10-13 05:11:15.400859379 +0000 UTC m=+24.246516069" watchObservedRunningTime="2025-10-13 05:11:15.403046343 +0000 UTC m=+24.248702953" Oct 13 05:11:16.110004 systemd-networkd[1480]: cilium_host: Link UP Oct 13 05:11:16.110292 systemd-networkd[1480]: cilium_net: Link UP Oct 13 05:11:16.110525 systemd-networkd[1480]: cilium_net: Gained carrier Oct 13 05:11:16.110724 systemd-networkd[1480]: cilium_host: Gained carrier Oct 13 05:11:16.190002 systemd-networkd[1480]: cilium_vxlan: Link UP Oct 13 05:11:16.190008 systemd-networkd[1480]: cilium_vxlan: Gained carrier Oct 13 05:11:16.386568 kubelet[2704]: E1013 05:11:16.386460 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:16.442157 kernel: NET: Registered PF_ALG protocol family Oct 13 05:11:16.445316 systemd-networkd[1480]: cilium_host: Gained IPv6LL Oct 13 05:11:16.953682 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Oct 13 05:11:17.010044 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:17.011625 sshd-session[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:17.015905 systemd-logind[1542]: New session 8 of user core. Oct 13 05:11:17.023418 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:11:17.035955 systemd-networkd[1480]: lxc_health: Link UP Oct 13 05:11:17.042706 systemd-networkd[1480]: lxc_health: Gained carrier Oct 13 05:11:17.093289 systemd-networkd[1480]: cilium_net: Gained IPv6LL Oct 13 05:11:17.159780 sshd[3841]: Connection closed by 10.0.0.1 port 34128 Oct 13 05:11:17.160088 sshd-session[3797]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:17.164855 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:34128.service: Deactivated successfully. Oct 13 05:11:17.167846 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:11:17.168859 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:11:17.169742 systemd-logind[1542]: Removed session 8. 
Oct 13 05:11:17.285310 systemd-networkd[1480]: cilium_vxlan: Gained IPv6LL Oct 13 05:11:17.388953 kubelet[2704]: E1013 05:11:17.388897 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:17.554458 kernel: eth0: renamed from tmp01c67 Oct 13 05:11:17.555600 systemd-networkd[1480]: lxcd04a8e5b1f6e: Link UP Oct 13 05:11:17.558966 systemd-networkd[1480]: lxcd04a8e5b1f6e: Gained carrier Oct 13 05:11:17.559543 systemd-networkd[1480]: lxc708e04a3ba09: Link UP Oct 13 05:11:17.565337 kernel: eth0: renamed from tmp0687d Oct 13 05:11:17.567033 systemd-networkd[1480]: lxc708e04a3ba09: Gained carrier Oct 13 05:11:18.393179 kubelet[2704]: E1013 05:11:18.391984 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:18.693314 systemd-networkd[1480]: lxc_health: Gained IPv6LL Oct 13 05:11:18.757270 systemd-networkd[1480]: lxc708e04a3ba09: Gained IPv6LL Oct 13 05:11:19.269286 systemd-networkd[1480]: lxcd04a8e5b1f6e: Gained IPv6LL Oct 13 05:11:19.394148 kubelet[2704]: E1013 05:11:19.394104 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:21.211810 containerd[1574]: time="2025-10-13T05:11:21.211290893Z" level=info msg="connecting to shim 01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000" address="unix:///run/containerd/s/81d426754623c24688575c1cb1c4c1a7cef78c1c0fa288b5307e7be426891b3e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:11:21.212636 containerd[1574]: time="2025-10-13T05:11:21.212603574Z" level=info msg="connecting to shim 0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf" address="unix:///run/containerd/s/a9278db8e8cd5790558d3eced42037b363980a568b960abf790c3c24d05b1ac2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:11:21.251356 systemd[1]: Started cri-containerd-01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000.scope - libcontainer container 01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000. Oct 13 05:11:21.252497 systemd[1]: Started cri-containerd-0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf.scope - libcontainer container 0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf. 
Oct 13 05:11:21.268892 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:11:21.270941 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:11:21.300636 containerd[1574]: time="2025-10-13T05:11:21.300568198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckkgv,Uid:b24b5c0a-3992-4245-bfc3-deb9622ed259,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000\"" Oct 13 05:11:21.301348 containerd[1574]: time="2025-10-13T05:11:21.301309679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p44jg,Uid:406bd773-ce5c-4688-b488-f5ae6ea47f1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf\"" Oct 13 05:11:21.301810 kubelet[2704]: E1013 05:11:21.301777 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:21.303162 kubelet[2704]: E1013 05:11:21.302208 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:21.306993 containerd[1574]: time="2025-10-13T05:11:21.306937846Z" level=info msg="CreateContainer within sandbox \"0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:11:21.308315 containerd[1574]: time="2025-10-13T05:11:21.308282967Z" level=info msg="CreateContainer within sandbox \"01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:11:21.319870 containerd[1574]: time="2025-10-13T05:11:21.319830941Z" level=info msg="Container 41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:21.332163 containerd[1574]: time="2025-10-13T05:11:21.331741995Z" level=info msg="CreateContainer within sandbox \"0687d20f01903f0268295ed67ba007827302209fe328786360ecbf3e5cff9acf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d\"" Oct 13 05:11:21.332482 containerd[1574]: time="2025-10-13T05:11:21.332441156Z" level=info msg="StartContainer for \"41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d\"" Oct 13 05:11:21.333662 containerd[1574]: time="2025-10-13T05:11:21.333625677Z" level=info msg="connecting to shim 41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d" address="unix:///run/containerd/s/a9278db8e8cd5790558d3eced42037b363980a568b960abf790c3c24d05b1ac2" protocol=ttrpc version=3 Oct 13 05:11:21.334523 containerd[1574]: time="2025-10-13T05:11:21.334489438Z" level=info msg="Container c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:11:21.346474 containerd[1574]: time="2025-10-13T05:11:21.346428492Z" level=info msg="CreateContainer within sandbox \"01c6788a9c1d62a796521817fc6bb33b425e74acc6e60048f8ac1d6b24403000\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824\"" Oct 13 05:11:21.347366 containerd[1574]: 
time="2025-10-13T05:11:21.347290493Z" level=info msg="StartContainer for \"c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824\"" Oct 13 05:11:21.348102 containerd[1574]: time="2025-10-13T05:11:21.348076254Z" level=info msg="connecting to shim c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824" address="unix:///run/containerd/s/81d426754623c24688575c1cb1c4c1a7cef78c1c0fa288b5307e7be426891b3e" protocol=ttrpc version=3 Oct 13 05:11:21.358295 systemd[1]: Started cri-containerd-41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d.scope - libcontainer container 41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d. Oct 13 05:11:21.366427 systemd[1]: Started cri-containerd-c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824.scope - libcontainer container c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824. Oct 13 05:11:21.405277 containerd[1574]: time="2025-10-13T05:11:21.404754281Z" level=info msg="StartContainer for \"41ebd9b08c2288d38ebd761b44871807725adbd8afbdbbf9ed64be3a47598d6d\" returns successfully" Oct 13 05:11:21.408343 containerd[1574]: time="2025-10-13T05:11:21.408233165Z" level=info msg="StartContainer for \"c16d4fe33f504724aa1a53a06489fe6ab2870138941b6939b4d62b6a5d232824\" returns successfully" Oct 13 05:11:21.414775 kubelet[2704]: E1013 05:11:21.414746 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:21.426280 kubelet[2704]: I1013 05:11:21.426224 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p44jg" podStartSLOduration=24.426206866 podStartE2EDuration="24.426206866s" podCreationTimestamp="2025-10-13 05:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:11:21.425958786 +0000 UTC m=+30.271615396" watchObservedRunningTime="2025-10-13 05:11:21.426206866 +0000 UTC m=+30.271863476" Oct 13 05:11:22.173685 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Oct 13 05:11:22.250641 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:22.252806 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:22.257433 systemd-logind[1542]: New session 9 of user core. Oct 13 05:11:22.267344 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:11:22.404854 sshd[4081]: Connection closed by 10.0.0.1 port 34152 Oct 13 05:11:22.406962 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:22.411824 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:11:22.412014 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:34152.service: Deactivated successfully. Oct 13 05:11:22.416284 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:11:22.418480 systemd-logind[1542]: Removed session 9. 
Oct 13 05:11:22.419177 kubelet[2704]: E1013 05:11:22.419119 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:22.424245 kubelet[2704]: E1013 05:11:22.424002 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:22.445633 kubelet[2704]: I1013 05:11:22.445560 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ckkgv" podStartSLOduration=25.445543595 podStartE2EDuration="25.445543595s" podCreationTimestamp="2025-10-13 05:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:11:22.43219454 +0000 UTC m=+31.277851150" watchObservedRunningTime="2025-10-13 05:11:22.445543595 +0000 UTC m=+31.291200205" Oct 13 05:11:23.420577 kubelet[2704]: E1013 05:11:23.420547 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:23.421559 kubelet[2704]: E1013 05:11:23.421344 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:11:27.421116 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234). Oct 13 05:11:27.468860 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:27.470467 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:27.475074 systemd-logind[1542]: New session 10 of user core. Oct 13 05:11:27.490379 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:11:27.633113 sshd[4101]: Connection closed by 10.0.0.1 port 58234 Oct 13 05:11:27.633469 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:27.636943 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:58234.service: Deactivated successfully. Oct 13 05:11:27.640645 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:11:27.643084 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:11:27.649118 systemd-logind[1542]: Removed session 10. Oct 13 05:11:32.648211 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:58290.service - OpenSSH per-connection server daemon (10.0.0.1:58290). Oct 13 05:11:32.713723 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 58290 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:32.716010 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:32.721816 systemd-logind[1542]: New session 11 of user core. Oct 13 05:11:32.729325 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:11:32.839255 sshd[4121]: Connection closed by 10.0.0.1 port 58290 Oct 13 05:11:32.839885 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:32.854751 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:58290.service: Deactivated successfully. Oct 13 05:11:32.856603 systemd[1]: session-11.scope: Deactivated successfully. 
Oct 13 05:11:32.857313 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:11:32.859824 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:58300.service - OpenSSH per-connection server daemon (10.0.0.1:58300). Oct 13 05:11:32.860818 systemd-logind[1542]: Removed session 11. Oct 13 05:11:32.934442 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 58300 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:32.936355 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:32.941126 systemd-logind[1542]: New session 12 of user core. Oct 13 05:11:32.953326 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:11:33.110870 sshd[4139]: Connection closed by 10.0.0.1 port 58300 Oct 13 05:11:33.111356 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:33.120054 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:58300.service: Deactivated successfully. Oct 13 05:11:33.123323 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:11:33.124108 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:11:33.127362 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:58306.service - OpenSSH per-connection server daemon (10.0.0.1:58306). Oct 13 05:11:33.128167 systemd-logind[1542]: Removed session 12. Oct 13 05:11:33.195943 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:33.197546 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:33.202194 systemd-logind[1542]: New session 13 of user core. Oct 13 05:11:33.219478 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:11:33.336183 sshd[4154]: Connection closed by 10.0.0.1 port 58306 Oct 13 05:11:33.336711 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:33.340945 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:58306.service: Deactivated successfully. Oct 13 05:11:33.343750 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:11:33.344383 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:11:33.345556 systemd-logind[1542]: Removed session 13. Oct 13 05:11:38.355406 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:45896.service - OpenSSH per-connection server daemon (10.0.0.1:45896). Oct 13 05:11:38.409693 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 45896 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:38.410922 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:38.414968 systemd-logind[1542]: New session 14 of user core. Oct 13 05:11:38.424301 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:11:38.547869 sshd[4170]: Connection closed by 10.0.0.1 port 45896 Oct 13 05:11:38.548207 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:38.552069 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:45896.service: Deactivated successfully. Oct 13 05:11:38.555412 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:11:38.556223 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:11:38.557234 systemd-logind[1542]: Removed session 14. 
Oct 13 05:11:43.565293 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:45972.service - OpenSSH per-connection server daemon (10.0.0.1:45972). Oct 13 05:11:43.619930 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 45972 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:43.621085 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:43.624980 systemd-logind[1542]: New session 15 of user core. Oct 13 05:11:43.636310 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:11:43.776449 sshd[4187]: Connection closed by 10.0.0.1 port 45972 Oct 13 05:11:43.776826 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:43.787873 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:45972.service: Deactivated successfully. Oct 13 05:11:43.790493 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:11:43.791407 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:11:43.794310 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980). Oct 13 05:11:43.795218 systemd-logind[1542]: Removed session 15. Oct 13 05:11:43.852075 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:43.853552 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:43.857643 systemd-logind[1542]: New session 16 of user core. Oct 13 05:11:43.864307 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:11:44.063413 sshd[4204]: Connection closed by 10.0.0.1 port 45980 Oct 13 05:11:44.063709 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:44.080396 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:45980.service: Deactivated successfully. Oct 13 05:11:44.081938 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:11:44.082656 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:11:44.084852 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:45996.service - OpenSSH per-connection server daemon (10.0.0.1:45996). Oct 13 05:11:44.085837 systemd-logind[1542]: Removed session 16. Oct 13 05:11:44.148412 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 45996 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:44.149780 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:44.154400 systemd-logind[1542]: New session 17 of user core. Oct 13 05:11:44.166306 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:11:44.789896 sshd[4219]: Connection closed by 10.0.0.1 port 45996 Oct 13 05:11:44.789757 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:44.803925 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:45996.service: Deactivated successfully. Oct 13 05:11:44.811563 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:11:44.817604 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:11:44.823856 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:45998.service - OpenSSH per-connection server daemon (10.0.0.1:45998). Oct 13 05:11:44.825942 systemd-logind[1542]: Removed session 17. 
Oct 13 05:11:44.892683 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 45998 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:44.894087 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:44.898105 systemd-logind[1542]: New session 18 of user core. Oct 13 05:11:44.906328 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:11:45.137379 sshd[4241]: Connection closed by 10.0.0.1 port 45998 Oct 13 05:11:45.137987 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:45.152844 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:45998.service: Deactivated successfully. Oct 13 05:11:45.156758 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:11:45.159914 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:11:45.164596 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:46014.service - OpenSSH per-connection server daemon (10.0.0.1:46014). Oct 13 05:11:45.166601 systemd-logind[1542]: Removed session 18. Oct 13 05:11:45.224507 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 46014 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:45.225918 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:45.230270 systemd-logind[1542]: New session 19 of user core. Oct 13 05:11:45.239337 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:11:45.350900 sshd[4255]: Connection closed by 10.0.0.1 port 46014 Oct 13 05:11:45.351443 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:45.355382 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:11:45.355672 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:46014.service: Deactivated successfully. Oct 13 05:11:45.357429 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:11:45.359164 systemd-logind[1542]: Removed session 19. Oct 13 05:11:50.363743 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988). Oct 13 05:11:50.418599 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:50.419804 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:50.423559 systemd-logind[1542]: New session 20 of user core. Oct 13 05:11:50.437303 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 05:11:50.552913 sshd[4276]: Connection closed by 10.0.0.1 port 38988 Oct 13 05:11:50.553227 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:50.556667 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:38988.service: Deactivated successfully. Oct 13 05:11:50.558351 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:11:50.558986 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:11:50.560163 systemd-logind[1542]: Removed session 20. Oct 13 05:11:55.564529 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:41240.service - OpenSSH per-connection server daemon (10.0.0.1:41240). 
Oct 13 05:11:55.627095 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 41240 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:55.628464 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:55.633697 systemd-logind[1542]: New session 21 of user core. Oct 13 05:11:55.646358 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:11:55.774197 sshd[4296]: Connection closed by 10.0.0.1 port 41240 Oct 13 05:11:55.773688 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Oct 13 05:11:55.794652 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:41240.service: Deactivated successfully. Oct 13 05:11:55.798667 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:11:55.800135 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:11:55.803071 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:41254.service - OpenSSH per-connection server daemon (10.0.0.1:41254). Oct 13 05:11:55.804382 systemd-logind[1542]: Removed session 21. Oct 13 05:11:55.887378 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 41254 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:11:55.889227 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:11:55.895949 systemd-logind[1542]: New session 22 of user core. Oct 13 05:11:55.904323 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 05:11:58.819549 containerd[1574]: time="2025-10-13T05:11:58.819375238Z" level=info msg="StopContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" with timeout 30 (s)" Oct 13 05:11:58.820145 containerd[1574]: time="2025-10-13T05:11:58.819973243Z" level=info msg="Stop container \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" with signal terminated" Oct 13 05:11:58.841049 systemd[1]: cri-containerd-da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc.scope: Deactivated successfully. 
Oct 13 05:11:58.844498 containerd[1574]: time="2025-10-13T05:11:58.844454103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" id:\"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" pid:3120 exited_at:{seconds:1760332318 nanos:843662977}" Oct 13 05:11:58.844877 containerd[1574]: time="2025-10-13T05:11:58.844518663Z" level=info msg="received exit event container_id:\"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" id:\"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" pid:3120 exited_at:{seconds:1760332318 nanos:843662977}" Oct 13 05:11:58.867257 containerd[1574]: time="2025-10-13T05:11:58.867211671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" id:\"1bf9be421aa7a2a5efac00489a591d6cfa9e62ea467d502e0ceff7dcded1dfb8\" pid:4343 exited_at:{seconds:1760332318 nanos:866691307}" Oct 13 05:11:58.871374 containerd[1574]: time="2025-10-13T05:11:58.871339181Z" level=info msg="StopContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" with timeout 2 (s)" Oct 13 05:11:58.872446 containerd[1574]: time="2025-10-13T05:11:58.872242228Z" level=info msg="Stop container \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" with signal terminated" Oct 13 05:11:58.875533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc-rootfs.mount: Deactivated successfully. Oct 13 05:11:58.880955 systemd-networkd[1480]: lxc_health: Link DOWN Oct 13 05:11:58.880973 systemd-networkd[1480]: lxc_health: Lost carrier Oct 13 05:11:58.888031 containerd[1574]: time="2025-10-13T05:11:58.887967383Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:11:58.891220 containerd[1574]: time="2025-10-13T05:11:58.890100319Z" level=info msg="StopContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" returns successfully" Oct 13 05:11:58.893873 containerd[1574]: time="2025-10-13T05:11:58.893821347Z" level=info msg="StopPodSandbox for \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\"" Oct 13 05:11:58.894333 containerd[1574]: time="2025-10-13T05:11:58.894114869Z" level=info msg="Container to stop \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.897783 systemd[1]: cri-containerd-2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583.scope: Deactivated successfully. Oct 13 05:11:58.898111 systemd[1]: cri-containerd-2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583.scope: Consumed 6.240s CPU time, 126M memory peak, 160K read from disk, 12.9M written to disk. 
Oct 13 05:11:58.900139 containerd[1574]: time="2025-10-13T05:11:58.899695510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" pid:3368 exited_at:{seconds:1760332318 nanos:899088865}" Oct 13 05:11:58.900139 containerd[1574]: time="2025-10-13T05:11:58.899782950Z" level=info msg="received exit event container_id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" id:\"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" pid:3368 exited_at:{seconds:1760332318 nanos:899088865}" Oct 13 05:11:58.905623 systemd[1]: cri-containerd-50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945.scope: Deactivated successfully. Oct 13 05:11:58.909189 containerd[1574]: time="2025-10-13T05:11:58.909092219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" id:\"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" pid:2825 exit_status:137 exited_at:{seconds:1760332318 nanos:908773177}" Oct 13 05:11:58.921425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583-rootfs.mount: Deactivated successfully. Oct 13 05:11:58.936416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945-rootfs.mount: Deactivated successfully. Oct 13 05:11:58.959007 containerd[1574]: time="2025-10-13T05:11:58.958957906Z" level=info msg="shim disconnected" id=50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945 namespace=k8s.io Oct 13 05:11:58.966259 containerd[1574]: time="2025-10-13T05:11:58.958997547Z" level=warning msg="cleaning up after shim disconnected" id=50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945 namespace=k8s.io Oct 13 05:11:58.966259 containerd[1574]: time="2025-10-13T05:11:58.966048799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:11:58.971405 containerd[1574]: time="2025-10-13T05:11:58.971367198Z" level=info msg="StopContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" returns successfully" Oct 13 05:11:58.972633 containerd[1574]: time="2025-10-13T05:11:58.972601847Z" level=info msg="StopPodSandbox for \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\"" Oct 13 05:11:58.972788 containerd[1574]: time="2025-10-13T05:11:58.972764888Z" level=info msg="Container to stop \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.972832 containerd[1574]: time="2025-10-13T05:11:58.972789248Z" level=info msg="Container to stop \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.972832 containerd[1574]: time="2025-10-13T05:11:58.972800008Z" level=info msg="Container to stop \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.972832 containerd[1574]: time="2025-10-13T05:11:58.972808928Z" level=info msg="Container to stop \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.972832 
containerd[1574]: time="2025-10-13T05:11:58.972816608Z" level=info msg="Container to stop \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:11:58.981641 systemd[1]: cri-containerd-448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c.scope: Deactivated successfully. Oct 13 05:11:58.987203 containerd[1574]: time="2025-10-13T05:11:58.986545430Z" level=info msg="TearDown network for sandbox \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" successfully" Oct 13 05:11:58.987203 containerd[1574]: time="2025-10-13T05:11:58.986581070Z" level=info msg="StopPodSandbox for \"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" returns successfully" Oct 13 05:11:58.987203 containerd[1574]: time="2025-10-13T05:11:58.986813192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" id:\"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" pid:2910 exit_status:137 exited_at:{seconds:1760332318 nanos:986285108}" Oct 13 05:11:58.989463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945-shm.mount: Deactivated successfully. Oct 13 05:11:58.990063 containerd[1574]: time="2025-10-13T05:11:58.990029975Z" level=info msg="received exit event sandbox_id:\"50d17d2f4a59850cee3d1cc4f7c644adddfe3115f3be2ff760839d5a3f3b2945\" exit_status:137 exited_at:{seconds:1760332318 nanos:908773177}" Oct 13 05:11:59.017816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c-rootfs.mount: Deactivated successfully. 
Oct 13 05:11:59.105294 containerd[1574]: time="2025-10-13T05:11:59.105186322Z" level=info msg="shim disconnected" id=448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c namespace=k8s.io Oct 13 05:11:59.105294 containerd[1574]: time="2025-10-13T05:11:59.105220283Z" level=warning msg="cleaning up after shim disconnected" id=448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c namespace=k8s.io Oct 13 05:11:59.105294 containerd[1574]: time="2025-10-13T05:11:59.105268963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:11:59.114556 containerd[1574]: time="2025-10-13T05:11:59.114476949Z" level=info msg="received exit event sandbox_id:\"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" exit_status:137 exited_at:{seconds:1760332318 nanos:986285108}" Oct 13 05:11:59.114990 containerd[1574]: time="2025-10-13T05:11:59.114912632Z" level=info msg="TearDown network for sandbox \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" successfully" Oct 13 05:11:59.114990 containerd[1574]: time="2025-10-13T05:11:59.114936872Z" level=info msg="StopPodSandbox for \"448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c\" returns successfully" Oct 13 05:11:59.206793 kubelet[2704]: I1013 05:11:59.206738 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6015f34-65bf-48d7-9be0-e207a43cff58-cilium-config-path\") pod \"c6015f34-65bf-48d7-9be0-e207a43cff58\" (UID: \"c6015f34-65bf-48d7-9be0-e207a43cff58\") " Oct 13 05:11:59.206793 kubelet[2704]: I1013 05:11:59.206786 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4r92\" (UniqueName: \"kubernetes.io/projected/c6015f34-65bf-48d7-9be0-e207a43cff58-kube-api-access-z4r92\") pod \"c6015f34-65bf-48d7-9be0-e207a43cff58\" (UID: \"c6015f34-65bf-48d7-9be0-e207a43cff58\") " Oct 13 05:11:59.209459 kubelet[2704]: I1013 05:11:59.209372 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6015f34-65bf-48d7-9be0-e207a43cff58-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6015f34-65bf-48d7-9be0-e207a43cff58" (UID: "c6015f34-65bf-48d7-9be0-e207a43cff58"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:11:59.211411 kubelet[2704]: I1013 05:11:59.211272 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6015f34-65bf-48d7-9be0-e207a43cff58-kube-api-access-z4r92" (OuterVolumeSpecName: "kube-api-access-z4r92") pod "c6015f34-65bf-48d7-9be0-e207a43cff58" (UID: "c6015f34-65bf-48d7-9be0-e207a43cff58"). InnerVolumeSpecName "kube-api-access-z4r92". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:11:59.282872 systemd[1]: Removed slice kubepods-besteffort-podc6015f34_65bf_48d7_9be0_e207a43cff58.slice - libcontainer container kubepods-besteffort-podc6015f34_65bf_48d7_9be0_e207a43cff58.slice. 
Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.306997 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-kernel\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.307045 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hubble-tls\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.307065 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhgt\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.307080 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-bpf-maps\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.307096 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-config-path\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307035 kubelet[2704]: I1013 05:11:59.307109 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-etc-cni-netd\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307123 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-lib-modules\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307150 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-cgroup\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307167 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hostproc\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307180 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cni-path\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307209 2704 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307366 kubelet[2704]: I1013 05:11:59.307242 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307487 kubelet[2704]: I1013 05:11:59.307255 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307487 kubelet[2704]: I1013 05:11:59.307270 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307487 kubelet[2704]: I1013 05:11:59.307281 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307808 kubelet[2704]: I1013 05:11:59.307563 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307808 kubelet[2704]: I1013 05:11:59.307624 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.307808 kubelet[2704]: I1013 05:11:59.307701 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-net\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307808 kubelet[2704]: I1013 05:11:59.307742 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-run\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307808 kubelet[2704]: I1013 05:11:59.307761 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-xtables-lock\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.307940 kubelet[2704]: I1013 05:11:59.307780 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/596e6789-6ea7-414e-82c9-7454b7e2d1ab-clustermesh-secrets\") pod \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\" (UID: \"596e6789-6ea7-414e-82c9-7454b7e2d1ab\") " Oct 13 05:11:59.308058 kubelet[2704]: I1013 05:11:59.308005 2704 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308058 kubelet[2704]: I1013 05:11:59.308011 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.308058 kubelet[2704]: I1013 05:11:59.308025 2704 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308058 kubelet[2704]: I1013 05:11:59.308054 2704 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308063 2704 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308070 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308079 2704 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308087 2704 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308095 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6015f34-65bf-48d7-9be0-e207a43cff58-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308104 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4r92\" (UniqueName: \"kubernetes.io/projected/c6015f34-65bf-48d7-9be0-e207a43cff58-kube-api-access-z4r92\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.308177 kubelet[2704]: I1013 05:11:59.308124 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.308313 kubelet[2704]: I1013 05:11:59.308159 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 05:11:59.308934 kubelet[2704]: I1013 05:11:59.308854 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:11:59.310032 kubelet[2704]: I1013 05:11:59.309985 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt" (OuterVolumeSpecName: "kube-api-access-7zhgt") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "kube-api-access-7zhgt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:11:59.310390 kubelet[2704]: I1013 05:11:59.310345 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:11:59.311043 kubelet[2704]: I1013 05:11:59.311005 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596e6789-6ea7-414e-82c9-7454b7e2d1ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "596e6789-6ea7-414e-82c9-7454b7e2d1ab" (UID: "596e6789-6ea7-414e-82c9-7454b7e2d1ab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408517 2704 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408551 2704 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/596e6789-6ea7-414e-82c9-7454b7e2d1ab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408561 2704 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408571 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zhgt\" (UniqueName: \"kubernetes.io/projected/596e6789-6ea7-414e-82c9-7454b7e2d1ab-kube-api-access-7zhgt\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408580 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408587 2704 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.408641 kubelet[2704]: I1013 05:11:59.408603 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/596e6789-6ea7-414e-82c9-7454b7e2d1ab-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 13 05:11:59.498989 kubelet[2704]: I1013 05:11:59.498941 2704 scope.go:117] "RemoveContainer" containerID="2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583" Oct 13 05:11:59.503804 containerd[1574]: 
time="2025-10-13T05:11:59.503754257Z" level=info msg="RemoveContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\"" Oct 13 05:11:59.504988 systemd[1]: Removed slice kubepods-burstable-pod596e6789_6ea7_414e_82c9_7454b7e2d1ab.slice - libcontainer container kubepods-burstable-pod596e6789_6ea7_414e_82c9_7454b7e2d1ab.slice. Oct 13 05:11:59.505089 systemd[1]: kubepods-burstable-pod596e6789_6ea7_414e_82c9_7454b7e2d1ab.slice: Consumed 6.323s CPU time, 126.3M memory peak, 160K read from disk, 12.9M written to disk. Oct 13 05:11:59.514261 containerd[1574]: time="2025-10-13T05:11:59.514214972Z" level=info msg="RemoveContainer for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" returns successfully" Oct 13 05:11:59.514591 kubelet[2704]: I1013 05:11:59.514570 2704 scope.go:117] "RemoveContainer" containerID="ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003" Oct 13 05:11:59.516293 containerd[1574]: time="2025-10-13T05:11:59.516264747Z" level=info msg="RemoveContainer for \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\"" Oct 13 05:11:59.521248 containerd[1574]: time="2025-10-13T05:11:59.521217262Z" level=info msg="RemoveContainer for \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" returns successfully" Oct 13 05:11:59.521545 kubelet[2704]: I1013 05:11:59.521526 2704 scope.go:117] "RemoveContainer" containerID="fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832" Oct 13 05:11:59.524302 containerd[1574]: time="2025-10-13T05:11:59.524271644Z" level=info msg="RemoveContainer for \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\"" Oct 13 05:11:59.538981 containerd[1574]: time="2025-10-13T05:11:59.538942549Z" level=info msg="RemoveContainer for \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" returns successfully" Oct 13 05:11:59.539241 kubelet[2704]: I1013 05:11:59.539219 2704 scope.go:117] "RemoveContainer" containerID="926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7" Oct 13 05:11:59.540709 containerd[1574]: time="2025-10-13T05:11:59.540682602Z" level=info msg="RemoveContainer for \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\"" Oct 13 05:11:59.543618 containerd[1574]: time="2025-10-13T05:11:59.543571862Z" level=info msg="RemoveContainer for \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" returns successfully" Oct 13 05:11:59.543769 kubelet[2704]: I1013 05:11:59.543746 2704 scope.go:117] "RemoveContainer" containerID="c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd" Oct 13 05:11:59.544895 containerd[1574]: time="2025-10-13T05:11:59.544873272Z" level=info msg="RemoveContainer for \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\"" Oct 13 05:11:59.547598 containerd[1574]: time="2025-10-13T05:11:59.547559651Z" level=info msg="RemoveContainer for \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" returns successfully" Oct 13 05:11:59.547756 kubelet[2704]: I1013 05:11:59.547738 2704 scope.go:117] "RemoveContainer" containerID="2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583" Oct 13 05:11:59.547931 containerd[1574]: time="2025-10-13T05:11:59.547899613Z" level=error msg="ContainerStatus for \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\": not found" Oct 
13 05:11:59.549079 kubelet[2704]: E1013 05:11:59.549047 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\": not found" containerID="2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583" Oct 13 05:11:59.549151 kubelet[2704]: I1013 05:11:59.549090 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583"} err="failed to get container status \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\": rpc error: code = NotFound desc = an error occurred when try to find container \"2df76402496fa3a247ecf4e23ca1448144b7d417bda4e1912406d628c4cdb583\": not found" Oct 13 05:11:59.549213 kubelet[2704]: I1013 05:11:59.549168 2704 scope.go:117] "RemoveContainer" containerID="ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003" Oct 13 05:11:59.549468 containerd[1574]: time="2025-10-13T05:11:59.549439384Z" level=error msg="ContainerStatus for \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\": not found" Oct 13 05:11:59.549575 kubelet[2704]: E1013 05:11:59.549557 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\": not found" containerID="ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003" Oct 13 05:11:59.549619 kubelet[2704]: I1013 05:11:59.549579 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003"} err="failed to get container status \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddc4a90b30e9a84883b2c03f7e1eae1615527e80410ff61f47cad18adaec6003\": not found" Oct 13 05:11:59.549619 kubelet[2704]: I1013 05:11:59.549591 2704 scope.go:117] "RemoveContainer" containerID="fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832" Oct 13 05:11:59.549750 containerd[1574]: time="2025-10-13T05:11:59.549722787Z" level=error msg="ContainerStatus for \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\": not found" Oct 13 05:11:59.549871 kubelet[2704]: E1013 05:11:59.549848 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\": not found" containerID="fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832" Oct 13 05:11:59.549907 kubelet[2704]: I1013 05:11:59.549883 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832"} err="failed to get container status \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"fe07f512a46ad0e615f164844091806dd78841587e2eabbe3187385db74a8832\": not found" Oct 13 05:11:59.549907 kubelet[2704]: I1013 05:11:59.549905 2704 scope.go:117] "RemoveContainer" containerID="926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7" Oct 13 05:11:59.550167 containerd[1574]: time="2025-10-13T05:11:59.550143630Z" level=error msg="ContainerStatus for \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\": not found" Oct 13 05:11:59.550308 kubelet[2704]: E1013 05:11:59.550285 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\": not found" containerID="926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7" Oct 13 05:11:59.550342 kubelet[2704]: I1013 05:11:59.550308 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7"} err="failed to get container status \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\": rpc error: code = NotFound desc = an error occurred when try to find container \"926b5c3e764f08a8f2ed63c0e7c0c7c525eda3ed01c48722f08e12606ea7cae7\": not found" Oct 13 05:11:59.550342 kubelet[2704]: I1013 05:11:59.550321 2704 scope.go:117] "RemoveContainer" containerID="c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd" Oct 13 05:11:59.550494 containerd[1574]: time="2025-10-13T05:11:59.550468712Z" level=error msg="ContainerStatus for \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\": not found" Oct 13 05:11:59.550624 kubelet[2704]: E1013 05:11:59.550590 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\": not found" containerID="c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd" Oct 13 05:11:59.550657 kubelet[2704]: I1013 05:11:59.550624 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd"} err="failed to get container status \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0dfc3bf907a8f12f96ce5d641735ad6bdb4ff1ceee0d2fed8b4842c2f48bcbd\": not found" Oct 13 05:11:59.550657 kubelet[2704]: I1013 05:11:59.550644 2704 scope.go:117] "RemoveContainer" containerID="da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc" Oct 13 05:11:59.552110 containerd[1574]: time="2025-10-13T05:11:59.552089003Z" level=info msg="RemoveContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\"" Oct 13 05:11:59.554659 containerd[1574]: time="2025-10-13T05:11:59.554624422Z" level=info msg="RemoveContainer for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" returns successfully" Oct 13 05:11:59.554827 kubelet[2704]: I1013 05:11:59.554784 2704 
scope.go:117] "RemoveContainer" containerID="da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc" Oct 13 05:11:59.555067 containerd[1574]: time="2025-10-13T05:11:59.555042785Z" level=error msg="ContainerStatus for \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\": not found" Oct 13 05:11:59.555189 kubelet[2704]: E1013 05:11:59.555171 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\": not found" containerID="da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc" Oct 13 05:11:59.555225 kubelet[2704]: I1013 05:11:59.555197 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc"} err="failed to get container status \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\": rpc error: code = NotFound desc = an error occurred when try to find container \"da5a6fcb735cd7ed9782cdc5f5bc142d55c9150af5c8e75d643285b1643dbacc\": not found" Oct 13 05:11:59.875263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-448ab076f0bb1c82abde76dfb4e40163a448c4a2f29fb822e692c670f9aa2d8c-shm.mount: Deactivated successfully. Oct 13 05:11:59.875370 systemd[1]: var-lib-kubelet-pods-596e6789\x2d6ea7\x2d414e\x2d82c9\x2d7454b7e2d1ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zhgt.mount: Deactivated successfully. Oct 13 05:11:59.875423 systemd[1]: var-lib-kubelet-pods-c6015f34\x2d65bf\x2d48d7\x2d9be0\x2de207a43cff58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4r92.mount: Deactivated successfully. Oct 13 05:11:59.875473 systemd[1]: var-lib-kubelet-pods-596e6789\x2d6ea7\x2d414e\x2d82c9\x2d7454b7e2d1ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 13 05:11:59.875521 systemd[1]: var-lib-kubelet-pods-596e6789\x2d6ea7\x2d414e\x2d82c9\x2d7454b7e2d1ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 13 05:12:00.774347 sshd[4312]: Connection closed by 10.0.0.1 port 41254 Oct 13 05:12:00.775269 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Oct 13 05:12:00.789559 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:41254.service: Deactivated successfully. Oct 13 05:12:00.791237 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:12:00.792169 systemd[1]: session-22.scope: Consumed 2.222s CPU time, 27M memory peak. Oct 13 05:12:00.792821 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit. Oct 13 05:12:00.795327 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:41264.service - OpenSSH per-connection server daemon (10.0.0.1:41264). Oct 13 05:12:00.796148 systemd-logind[1542]: Removed session 22. Oct 13 05:12:00.866372 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 41264 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:12:00.868268 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:12:00.872487 systemd-logind[1542]: New session 23 of user core. Oct 13 05:12:00.880318 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 13 05:12:01.273609 kubelet[2704]: I1013 05:12:01.273561 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="596e6789-6ea7-414e-82c9-7454b7e2d1ab" path="/var/lib/kubelet/pods/596e6789-6ea7-414e-82c9-7454b7e2d1ab/volumes" Oct 13 05:12:01.274110 kubelet[2704]: I1013 05:12:01.274084 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6015f34-65bf-48d7-9be0-e207a43cff58" path="/var/lib/kubelet/pods/c6015f34-65bf-48d7-9be0-e207a43cff58/volumes" Oct 13 05:12:01.332576 kubelet[2704]: E1013 05:12:01.332518 2704 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 13 05:12:01.982520 sshd[4475]: Connection closed by 10.0.0.1 port 41264 Oct 13 05:12:01.983035 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Oct 13 05:12:01.996272 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:41264.service: Deactivated successfully. Oct 13 05:12:01.997936 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:12:01.999500 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:12:02.003431 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:41272.service - OpenSSH per-connection server daemon (10.0.0.1:41272). Oct 13 05:12:02.005497 systemd-logind[1542]: Removed session 23. Oct 13 05:12:02.016152 systemd[1]: Created slice kubepods-burstable-podb5119039_826b_40a8_9d4c_15f5b9feefba.slice - libcontainer container kubepods-burstable-podb5119039_826b_40a8_9d4c_15f5b9feefba.slice. Oct 13 05:12:02.066442 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 41272 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:12:02.067746 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:12:02.073216 systemd-logind[1542]: New session 24 of user core. Oct 13 05:12:02.082330 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 13 05:12:02.122700 kubelet[2704]: I1013 05:12:02.122659 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-host-proc-sys-kernel\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.122700 kubelet[2704]: I1013 05:12:02.122701 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5119039-826b-40a8-9d4c-15f5b9feefba-hubble-tls\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.122700 kubelet[2704]: I1013 05:12:02.122724 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5119039-826b-40a8-9d4c-15f5b9feefba-cilium-config-path\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.122700 kubelet[2704]: I1013 05:12:02.122743 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-etc-cni-netd\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123085 kubelet[2704]: I1013 05:12:02.123056 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-host-proc-sys-net\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123181 kubelet[2704]: I1013 05:12:02.123093 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-cilium-run\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123181 kubelet[2704]: I1013 05:12:02.123111 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-hostproc\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123181 kubelet[2704]: I1013 05:12:02.123140 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5119039-826b-40a8-9d4c-15f5b9feefba-cilium-ipsec-secrets\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123181 kubelet[2704]: I1013 05:12:02.123165 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-bpf-maps\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123309 kubelet[2704]: I1013 05:12:02.123193 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-cilium-cgroup\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123309 kubelet[2704]: I1013 05:12:02.123209 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-xtables-lock\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123309 kubelet[2704]: I1013 05:12:02.123240 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5119039-826b-40a8-9d4c-15f5b9feefba-clustermesh-secrets\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123309 kubelet[2704]: I1013 05:12:02.123258 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxwjc\" (UniqueName: \"kubernetes.io/projected/b5119039-826b-40a8-9d4c-15f5b9feefba-kube-api-access-mxwjc\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123309 kubelet[2704]: I1013 05:12:02.123295 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-cni-path\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.123427 kubelet[2704]: I1013 05:12:02.123316 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5119039-826b-40a8-9d4c-15f5b9feefba-lib-modules\") pod \"cilium-qtmzj\" (UID: \"b5119039-826b-40a8-9d4c-15f5b9feefba\") " pod="kube-system/cilium-qtmzj" Oct 13 05:12:02.131944 sshd[4490]: Connection closed by 10.0.0.1 port 41272 Oct 13 05:12:02.132459 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Oct 13 05:12:02.148843 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:41272.service: Deactivated successfully. Oct 13 05:12:02.151786 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 05:12:02.155298 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit. Oct 13 05:12:02.158582 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:41284.service - OpenSSH per-connection server daemon (10.0.0.1:41284). Oct 13 05:12:02.159479 systemd-logind[1542]: Removed session 24. Oct 13 05:12:02.213864 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 41284 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:12:02.215259 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:12:02.220751 systemd-logind[1542]: New session 25 of user core. Oct 13 05:12:02.228365 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 13 05:12:02.323445 kubelet[2704]: E1013 05:12:02.323347 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:12:02.324940 containerd[1574]: time="2025-10-13T05:12:02.324356769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qtmzj,Uid:b5119039-826b-40a8-9d4c-15f5b9feefba,Namespace:kube-system,Attempt:0,}" Oct 13 05:12:02.345208 containerd[1574]: time="2025-10-13T05:12:02.345158466Z" level=info msg="connecting to shim a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:12:02.371296 systemd[1]: Started cri-containerd-a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73.scope - libcontainer container a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73. Oct 13 05:12:02.393725 containerd[1574]: time="2025-10-13T05:12:02.393684545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qtmzj,Uid:b5119039-826b-40a8-9d4c-15f5b9feefba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\"" Oct 13 05:12:02.394468 kubelet[2704]: E1013 05:12:02.394404 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:12:02.398074 containerd[1574]: time="2025-10-13T05:12:02.398032414Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 05:12:02.405398 containerd[1574]: time="2025-10-13T05:12:02.404761658Z" level=info msg="Container b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:12:02.409196 containerd[1574]: time="2025-10-13T05:12:02.409161967Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\"" Oct 13 05:12:02.409715 containerd[1574]: time="2025-10-13T05:12:02.409689411Z" level=info msg="StartContainer for \"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\"" Oct 13 05:12:02.410548 containerd[1574]: time="2025-10-13T05:12:02.410524616Z" level=info msg="connecting to shim b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" protocol=ttrpc version=3 Oct 13 05:12:02.439365 systemd[1]: Started cri-containerd-b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3.scope - libcontainer container b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3. Oct 13 05:12:02.475759 containerd[1574]: time="2025-10-13T05:12:02.475657885Z" level=info msg="StartContainer for \"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\" returns successfully" Oct 13 05:12:02.485092 systemd[1]: cri-containerd-b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3.scope: Deactivated successfully. 
Oct 13 05:12:02.487439 containerd[1574]: time="2025-10-13T05:12:02.487398403Z" level=info msg="received exit event container_id:\"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\" id:\"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\" pid:4568 exited_at:{seconds:1760332322 nanos:486333676}" Oct 13 05:12:02.487810 containerd[1574]: time="2025-10-13T05:12:02.487444003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\" id:\"b20618fe40e95d458dbcfb6c19e81fd2f0c911f83e966371835bf9f244b6f7c3\" pid:4568 exited_at:{seconds:1760332322 nanos:486333676}" Oct 13 05:12:02.523473 kubelet[2704]: E1013 05:12:02.523424 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:12:02.615723 kubelet[2704]: I1013 05:12:02.615614 2704 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T05:12:02Z","lastTransitionTime":"2025-10-13T05:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 13 05:12:03.515739 kubelet[2704]: E1013 05:12:03.515505 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:12:03.519844 containerd[1574]: time="2025-10-13T05:12:03.519801992Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 05:12:03.528022 containerd[1574]: time="2025-10-13T05:12:03.527969004Z" level=info msg="Container 7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:12:03.535547 containerd[1574]: time="2025-10-13T05:12:03.534349445Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\"" Oct 13 05:12:03.536496 containerd[1574]: time="2025-10-13T05:12:03.536460899Z" level=info msg="StartContainer for \"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\"" Oct 13 05:12:03.537462 containerd[1574]: time="2025-10-13T05:12:03.537400585Z" level=info msg="connecting to shim 7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" protocol=ttrpc version=3 Oct 13 05:12:03.563361 systemd[1]: Started cri-containerd-7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82.scope - libcontainer container 7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82. Oct 13 05:12:03.587405 containerd[1574]: time="2025-10-13T05:12:03.587366065Z" level=info msg="StartContainer for \"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\" returns successfully" Oct 13 05:12:03.594809 systemd[1]: cri-containerd-7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82.scope: Deactivated successfully. 
Oct 13 05:12:03.597483 containerd[1574]: time="2025-10-13T05:12:03.597292808Z" level=info msg="received exit event container_id:\"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\" id:\"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\" pid:4614 exited_at:{seconds:1760332323 nanos:596828445}"
Oct 13 05:12:03.597593 containerd[1574]: time="2025-10-13T05:12:03.597559610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\" id:\"7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82\" pid:4614 exited_at:{seconds:1760332323 nanos:596828445}"
Oct 13 05:12:03.616155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c29ddddcb88d453556193a86c51a1fa10e14dba94a3e5f3ac5f5f9a4e671d82-rootfs.mount: Deactivated successfully.
Oct 13 05:12:04.518877 kubelet[2704]: E1013 05:12:04.518198 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:04.527965 containerd[1574]: time="2025-10-13T05:12:04.527917681Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 13 05:12:04.552545 containerd[1574]: time="2025-10-13T05:12:04.552492194Z" level=info msg="Container 1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:12:04.561016 containerd[1574]: time="2025-10-13T05:12:04.560974407Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\""
Oct 13 05:12:04.561726 containerd[1574]: time="2025-10-13T05:12:04.561624131Z" level=info msg="StartContainer for \"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\""
Oct 13 05:12:04.562978 containerd[1574]: time="2025-10-13T05:12:04.562953180Z" level=info msg="connecting to shim 1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" protocol=ttrpc version=3
Oct 13 05:12:04.586389 systemd[1]: Started cri-containerd-1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e.scope - libcontainer container 1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e.
Oct 13 05:12:04.622556 systemd[1]: cri-containerd-1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e.scope: Deactivated successfully.
Oct 13 05:12:04.623657 containerd[1574]: time="2025-10-13T05:12:04.623619638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\" id:\"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\" pid:4659 exited_at:{seconds:1760332324 nanos:623328636}"
Oct 13 05:12:04.633846 containerd[1574]: time="2025-10-13T05:12:04.633801141Z" level=info msg="received exit event container_id:\"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\" id:\"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\" pid:4659 exited_at:{seconds:1760332324 nanos:623328636}"
Oct 13 05:12:04.635025 containerd[1574]: time="2025-10-13T05:12:04.634990829Z" level=info msg="StartContainer for \"1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e\" returns successfully"
Oct 13 05:12:04.658958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1531e22a1f117ea8149102f2264fcc3d8d4d6db5ee1d96168d0624711270fc9e-rootfs.mount: Deactivated successfully.
Oct 13 05:12:05.524668 kubelet[2704]: E1013 05:12:05.524448 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:05.529677 containerd[1574]: time="2025-10-13T05:12:05.529632558Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 13 05:12:05.537661 containerd[1574]: time="2025-10-13T05:12:05.537607806Z" level=info msg="Container 9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:12:05.544624 containerd[1574]: time="2025-10-13T05:12:05.544571568Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\""
Oct 13 05:12:05.545695 containerd[1574]: time="2025-10-13T05:12:05.545661135Z" level=info msg="StartContainer for \"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\""
Oct 13 05:12:05.546700 containerd[1574]: time="2025-10-13T05:12:05.546631421Z" level=info msg="connecting to shim 9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" protocol=ttrpc version=3
Oct 13 05:12:05.571320 systemd[1]: Started cri-containerd-9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a.scope - libcontainer container 9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a.
Oct 13 05:12:05.595426 systemd[1]: cri-containerd-9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a.scope: Deactivated successfully.
Oct 13 05:12:05.596695 containerd[1574]: time="2025-10-13T05:12:05.596662964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\" id:\"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\" pid:4699 exited_at:{seconds:1760332325 nanos:596440363}"
Oct 13 05:12:05.596918 containerd[1574]: time="2025-10-13T05:12:05.596890926Z" level=info msg="received exit event container_id:\"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\" id:\"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\" pid:4699 exited_at:{seconds:1760332325 nanos:596440363}"
Oct 13 05:12:05.599294 containerd[1574]: time="2025-10-13T05:12:05.598765177Z" level=info msg="StartContainer for \"9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a\" returns successfully"
Oct 13 05:12:05.618403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1a6609cc67ba32cf79fafda0081eb0440a5cdd93d186d835a09c45b9c17c7a-rootfs.mount: Deactivated successfully.
Oct 13 05:12:06.333684 kubelet[2704]: E1013 05:12:06.333577 2704 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 13 05:12:06.534386 kubelet[2704]: E1013 05:12:06.534346 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:06.543751 containerd[1574]: time="2025-10-13T05:12:06.543700261Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 13 05:12:06.571152 containerd[1574]: time="2025-10-13T05:12:06.570792781Z" level=info msg="Container 5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:12:06.578171 containerd[1574]: time="2025-10-13T05:12:06.578110704Z" level=info msg="CreateContainer within sandbox \"a55100673a6a4711dbdc2ce2a612305db1616dbef6155ed77e9c0853a4c8aa73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\""
Oct 13 05:12:06.581073 containerd[1574]: time="2025-10-13T05:12:06.581043761Z" level=info msg="StartContainer for \"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\""
Oct 13 05:12:06.583340 containerd[1574]: time="2025-10-13T05:12:06.582122128Z" level=info msg="connecting to shim 5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923" address="unix:///run/containerd/s/8bff410eebe93dcb94dd7a70d7f06ce68c5dc659555fd10ac0faf70f1b983fd5" protocol=ttrpc version=3
Oct 13 05:12:06.605325 systemd[1]: Started cri-containerd-5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923.scope - libcontainer container 5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923.
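With the cilium-agent container created above, every container of the cilium-qtmzj pod has gone through the same sandbox, in the order mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent. A small sketch for recovering that order from a journal excerpt like this one (the regex assumes the msg="..." phrasing shown above; it is an illustration, not a containerd tool):

    import re

    # Capture the Name: field from each "CreateContainer within sandbox ... for container"
    # message; the "returns container id" variants are skipped so each name appears once.
    NAME_RE = re.compile(r'CreateContainer within sandbox .* for container '
                         r'&ContainerMetadata\{Name:([^,]+),')

    def container_order(journal_text: str) -> list[str]:
        return NAME_RE.findall(journal_text)

    # Fed the records above, this yields:
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #  'clean-cilium-state', 'cilium-agent']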
Oct 13 05:12:06.632847 containerd[1574]: time="2025-10-13T05:12:06.632805267Z" level=info msg="StartContainer for \"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" returns successfully"
Oct 13 05:12:06.694847 containerd[1574]: time="2025-10-13T05:12:06.694800193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" id:\"7d0fc29685feff08d9c467f0cfc9f97e607a3a77ce91dd633a2df77d67a95d3f\" pid:4766 exited_at:{seconds:1760332326 nanos:694443751}"
Oct 13 05:12:06.919209 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 13 05:12:07.541015 kubelet[2704]: E1013 05:12:07.540985 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:07.561309 kubelet[2704]: I1013 05:12:07.561170 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qtmzj" podStartSLOduration=6.561154698 podStartE2EDuration="6.561154698s" podCreationTimestamp="2025-10-13 05:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:12:07.560987898 +0000 UTC m=+76.406644508" watchObservedRunningTime="2025-10-13 05:12:07.561154698 +0000 UTC m=+76.406811308"
Oct 13 05:12:08.543530 kubelet[2704]: E1013 05:12:08.543486 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:08.698485 containerd[1574]: time="2025-10-13T05:12:08.698440325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" id:\"3cfeda26941885682bd80c49d1346643ba38b6567221c97765b1c16b370c3bc4\" pid:4930 exit_status:1 exited_at:{seconds:1760332328 nanos:698020723}"
Oct 13 05:12:09.271438 kubelet[2704]: E1013 05:12:09.271405 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:09.851450 systemd-networkd[1480]: lxc_health: Link UP
Oct 13 05:12:09.865262 systemd-networkd[1480]: lxc_health: Gained carrier
Oct 13 05:12:10.324914 kubelet[2704]: E1013 05:12:10.324868 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:10.547710 kubelet[2704]: E1013 05:12:10.547680 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:10.836000 containerd[1574]: time="2025-10-13T05:12:10.835956126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" id:\"0057ba6344aae7af84a1934d38c126d6047d7e7dc0a87bd06e269ff4e03aa974\" pid:5299 exited_at:{seconds:1760332330 nanos:835486283}"
Oct 13 05:12:11.429277 systemd-networkd[1480]: lxc_health: Gained IPv6LL
Oct 13 05:12:11.549136 kubelet[2704]: E1013 05:12:11.549109 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
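The pod_startup_latency_tracker entry above reports podStartSLOduration=6.561154698s for cilium-qtmzj. That is essentially the gap between podCreationTimestamp (05:12:01) and the observed running time (05:12:07.560987898), with no image-pull time counted (both pull timestamps are the zero value). A rough re-derivation, illustrative only; the reported figure is sampled slightly later than observedRunningTime, which would account for the sub-millisecond difference:

    from datetime import datetime, timezone

    created = datetime(2025, 10, 13, 5, 12, 1, tzinfo=timezone.utc)
    # observedRunningTime 05:12:07.560987898, rounded to microseconds
    running = datetime(2025, 10, 13, 5, 12, 7, 560988, tzinfo=timezone.utc)
    print((running - created).total_seconds())  # ~6.56 s, close to the reported 6.561154698 s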
Oct 13 05:12:12.992753 containerd[1574]: time="2025-10-13T05:12:12.992702264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" id:\"8bcb21e279187d4eec218c744a92fd296692aa203af0ee214e1f69f365154b73\" pid:5334 exited_at:{seconds:1760332332 nanos:992377343}"
Oct 13 05:12:13.272976 kubelet[2704]: E1013 05:12:13.272515 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:12:15.101469 containerd[1574]: time="2025-10-13T05:12:15.101426635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923\" id:\"ba4ef4b2cec33629209aca63dc3fc343a27177ef7bb3d7d3bcda34d7172a0f23\" pid:5364 exited_at:{seconds:1760332335 nanos:100865432}"
Oct 13 05:12:15.106182 sshd[4504]: Connection closed by 10.0.0.1 port 41284
Oct 13 05:12:15.106364 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
Oct 13 05:12:15.111511 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:41284.service: Deactivated successfully.
Oct 13 05:12:15.113401 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:12:15.116955 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:12:15.117964 systemd-logind[1542]: Removed session 25.
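Unlike the init containers earlier, the TaskExit events for container_id 5665c5... above carry id values (7d0fc2..., 3cfeda..., 0057ba..., 8bcb21..., ba4ef4...) that differ from the container id: the cilium-agent container keeps running, and each event marks a short-lived process exec'd inside it (plausibly health or status probes; the one at 05:12:08 returned exit_status:1). A trivial way to state the distinction, illustrative only:

    # In containerd TaskExit events, id == container_id means the container's own task
    # ended; a different id is an exec'd process inside a still-running container.
    def classify_task_exit(container_id: str, exec_id: str) -> str:
        return "container exited" if exec_id == container_id else "exec process exited"

    agent = "5665c56fa0762d2f27221577483bd0bea3306327bfea5b932141b71e418e2923"
    probe = "ba4ef4b2cec33629209aca63dc3fc343a27177ef7bb3d7d3bcda34d7172a0f23"
    print(classify_task_exit(agent, agent))  # container exited (the pattern seen for the init containers)
    print(classify_task_exit(agent, probe))  # exec process exited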