Sep 9 05:03:27.767455 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 05:03:27.767477 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 05:03:27.767487 kernel: KASLR enabled
Sep 9 05:03:27.767493 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:03:27.767517 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 9 05:03:27.767523 kernel: random: crng init done
Sep 9 05:03:27.767530 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 9 05:03:27.767536 kernel: secureboot: Secure boot enabled
Sep 9 05:03:27.767542 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:03:27.767549 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 9 05:03:27.767555 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:03:27.767561 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767566 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767572 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767579 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767587 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767593 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767599 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767605 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767612 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:03:27.767618 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 05:03:27.767624 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 05:03:27.767631 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:03:27.767637 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 9 05:03:27.767643 kernel: Zone ranges:
Sep 9 05:03:27.767651 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:03:27.767657 kernel: DMA32 empty
Sep 9 05:03:27.767662 kernel: Normal empty
Sep 9 05:03:27.767669 kernel: Device empty
Sep 9 05:03:27.767675 kernel: Movable zone start for each node
Sep 9 05:03:27.767681 kernel: Early memory node ranges
Sep 9 05:03:27.767687 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 9 05:03:27.767693 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 9 05:03:27.767699 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 9 05:03:27.767705 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 9 05:03:27.767710 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 9 05:03:27.767716 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 9 05:03:27.767724 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 9 05:03:27.767730 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 9 05:03:27.767737 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 05:03:27.767746 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:03:27.767752 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 05:03:27.767759 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 9 05:03:27.767765 kernel: psci: probing for conduit method from ACPI.
Sep 9 05:03:27.767773 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 05:03:27.767780 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 05:03:27.767786 kernel: psci: Trusted OS migration not required
Sep 9 05:03:27.767792 kernel: psci: SMC Calling Convention v1.1
Sep 9 05:03:27.767799 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 05:03:27.767806 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 05:03:27.767812 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 05:03:27.767819 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 05:03:27.767825 kernel: Detected PIPT I-cache on CPU0
Sep 9 05:03:27.767833 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 05:03:27.767839 kernel: CPU features: detected: Spectre-v4
Sep 9 05:03:27.767846 kernel: CPU features: detected: Spectre-BHB
Sep 9 05:03:27.767853 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 05:03:27.767859 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 05:03:27.767865 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 05:03:27.767872 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 05:03:27.767878 kernel: alternatives: applying boot alternatives
Sep 9 05:03:27.767886 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:03:27.767892 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:03:27.767899 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:03:27.767907 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:03:27.767914 kernel: Fallback order for Node 0: 0
Sep 9 05:03:27.767920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 05:03:27.767926 kernel: Policy zone: DMA
Sep 9 05:03:27.767933 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:03:27.767939 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 05:03:27.767946 kernel: software IO TLB: area num 4.
Sep 9 05:03:27.767952 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 05:03:27.767959 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 9 05:03:27.767972 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:03:27.767979 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:03:27.767989 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:03:27.768001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:03:27.768011 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:03:27.768019 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:03:27.768026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:03:27.768033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:03:27.768040 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:03:27.768047 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:03:27.768055 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 05:03:27.768062 kernel: GICv3: 256 SPIs implemented
Sep 9 05:03:27.768068 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 05:03:27.768075 kernel: Root IRQ handler: gic_handle_irq
Sep 9 05:03:27.768083 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 05:03:27.768089 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 05:03:27.768096 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 05:03:27.768103 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 05:03:27.768109 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 05:03:27.768116 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 05:03:27.768122 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 05:03:27.768129 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 05:03:27.768136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:03:27.768142 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:03:27.768149 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 05:03:27.768156 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 05:03:27.768164 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 05:03:27.768171 kernel: arm-pv: using stolen time PV
Sep 9 05:03:27.768177 kernel: Console: colour dummy device 80x25
Sep 9 05:03:27.768184 kernel: ACPI: Core revision 20240827
Sep 9 05:03:27.768192 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 05:03:27.768199 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:03:27.768205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:03:27.768212 kernel: landlock: Up and running.
Sep 9 05:03:27.768218 kernel: SELinux: Initializing.
Sep 9 05:03:27.768227 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:03:27.768234 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:03:27.768240 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:03:27.768247 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:03:27.768254 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:03:27.768260 kernel: Remapping and enabling EFI services.
Sep 9 05:03:27.768267 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:03:27.768274 kernel: Detected PIPT I-cache on CPU1
Sep 9 05:03:27.768281 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 05:03:27.768289 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 05:03:27.768301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:03:27.768308 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 05:03:27.768317 kernel: Detected PIPT I-cache on CPU2
Sep 9 05:03:27.768324 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 05:03:27.768331 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 05:03:27.768338 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:03:27.768361 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 05:03:27.768368 kernel: Detected PIPT I-cache on CPU3
Sep 9 05:03:27.768376 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 05:03:27.768384 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 05:03:27.768391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:03:27.768398 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 05:03:27.768405 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:03:27.768412 kernel: SMP: Total of 4 processors activated.
Sep 9 05:03:27.768419 kernel: CPU: All CPU(s) started at EL1
Sep 9 05:03:27.768427 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 05:03:27.768434 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 05:03:27.768442 kernel: CPU features: detected: Common not Private translations
Sep 9 05:03:27.768449 kernel: CPU features: detected: CRC32 instructions
Sep 9 05:03:27.768456 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 05:03:27.768463 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 05:03:27.768471 kernel: CPU features: detected: LSE atomic instructions
Sep 9 05:03:27.768478 kernel: CPU features: detected: Privileged Access Never
Sep 9 05:03:27.768485 kernel: CPU features: detected: RAS Extension Support
Sep 9 05:03:27.768492 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 05:03:27.768542 kernel: alternatives: applying system-wide alternatives
Sep 9 05:03:27.768551 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 05:03:27.768559 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 9 05:03:27.768566 kernel: devtmpfs: initialized
Sep 9 05:03:27.768573 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:03:27.768580 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:03:27.768588 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 05:03:27.768595 kernel: 0 pages in range for non-PLT usage
Sep 9 05:03:27.768602 kernel: 508560 pages in range for PLT usage
Sep 9 05:03:27.768609 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:03:27.768617 kernel: SMBIOS 3.0.0 present.
Sep 9 05:03:27.768624 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 05:03:27.768631 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:03:27.768638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:03:27.768646 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 05:03:27.768653 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 05:03:27.768660 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 05:03:27.768667 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:03:27.768674 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 9 05:03:27.768682 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:03:27.768689 kernel: cpuidle: using governor menu
Sep 9 05:03:27.768696 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 05:03:27.768703 kernel: ASID allocator initialised with 32768 entries
Sep 9 05:03:27.768710 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:03:27.768717 kernel: Serial: AMBA PL011 UART driver
Sep 9 05:03:27.768724 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:03:27.768731 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:03:27.768737 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 05:03:27.768746 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 05:03:27.768752 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:03:27.768759 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:03:27.768766 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 05:03:27.768773 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 05:03:27.768779 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:03:27.768786 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:03:27.768794 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:03:27.768801 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:03:27.768809 kernel: ACPI: Interpreter enabled
Sep 9 05:03:27.768816 kernel: ACPI: Using GIC for interrupt routing
Sep 9 05:03:27.768823 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 05:03:27.768830 kernel: ACPI: CPU0 has been hot-added
Sep 9 05:03:27.768837 kernel: ACPI: CPU1 has been hot-added
Sep 9 05:03:27.768844 kernel: ACPI: CPU2 has been hot-added
Sep 9 05:03:27.768851 kernel: ACPI: CPU3 has been hot-added
Sep 9 05:03:27.768858 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 05:03:27.768865 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 05:03:27.768873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:03:27.769040 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:03:27.769108 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 05:03:27.769166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 05:03:27.769224 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 05:03:27.769280 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 05:03:27.769289 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 05:03:27.769299 kernel: PCI host bridge to bus 0000:00
Sep 9 05:03:27.769371 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 05:03:27.769427 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 05:03:27.769481 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 05:03:27.769548 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:03:27.769633 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:03:27.769703 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:03:27.769765 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 05:03:27.769825 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 05:03:27.769883 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 05:03:27.769941 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 05:03:27.770015 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 05:03:27.770078 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 05:03:27.770134 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 05:03:27.770206 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 05:03:27.770259 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 05:03:27.770268 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 05:03:27.770275 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 05:03:27.770282 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 05:03:27.770289 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 05:03:27.770297 kernel: iommu: Default domain type: Translated
Sep 9 05:03:27.770303 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 05:03:27.770312 kernel: efivars: Registered efivars operations
Sep 9 05:03:27.770319 kernel: vgaarb: loaded
Sep 9 05:03:27.770326 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 05:03:27.770333 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:03:27.770341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:03:27.770348 kernel: pnp: PnP ACPI init
Sep 9 05:03:27.770417 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 05:03:27.770428 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 05:03:27.770436 kernel: NET: Registered PF_INET protocol family
Sep 9 05:03:27.770443 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:03:27.770450 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:03:27.770458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:03:27.770465 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:03:27.770472 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:03:27.770479 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:03:27.770486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:03:27.770494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:03:27.770525 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:03:27.770533 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:03:27.770540 kernel: kvm [1]: HYP mode not available
Sep 9 05:03:27.770547 kernel: Initialise system trusted keyrings
Sep 9 05:03:27.770554 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:03:27.770561 kernel: Key type asymmetric registered
Sep 9 05:03:27.770568 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:03:27.770575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 05:03:27.770582 kernel: io scheduler mq-deadline registered
Sep 9 05:03:27.770591 kernel: io scheduler kyber registered
Sep 9 05:03:27.770598 kernel: io scheduler bfq registered
Sep 9 05:03:27.770606 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 05:03:27.770613 kernel: ACPI: button: Power Button [PWRB]
Sep 9 05:03:27.770621 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 05:03:27.770689 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 05:03:27.770698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:03:27.770706 kernel: thunder_xcv, ver 1.0
Sep 9 05:03:27.770713 kernel: thunder_bgx, ver 1.0
Sep 9 05:03:27.770722 kernel: nicpf, ver 1.0
Sep 9 05:03:27.770729 kernel: nicvf, ver 1.0
Sep 9 05:03:27.770796 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 05:03:27.770853 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T05:03:27 UTC (1757394207)
Sep 9 05:03:27.770862 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 05:03:27.770870 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 05:03:27.770877 kernel: watchdog: NMI not fully supported
Sep 9 05:03:27.770884 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 05:03:27.770892 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:03:27.770899 kernel: Segment Routing with IPv6
Sep 9 05:03:27.770906 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:03:27.770913 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:03:27.770920 kernel: Key type dns_resolver registered
Sep 9 05:03:27.770928 kernel: registered taskstats version 1
Sep 9 05:03:27.770935 kernel: Loading compiled-in X.509 certificates
Sep 9 05:03:27.770942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5'
Sep 9 05:03:27.770949 kernel: Demotion targets for Node 0: null
Sep 9 05:03:27.770958 kernel: Key type .fscrypt registered
Sep 9 05:03:27.770973 kernel: Key type fscrypt-provisioning registered
Sep 9 05:03:27.770981 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:03:27.770988 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:03:27.770995 kernel: ima: No architecture policies found
Sep 9 05:03:27.771002 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 05:03:27.771009 kernel: clk: Disabling unused clocks
Sep 9 05:03:27.771016 kernel: PM: genpd: Disabling unused power domains
Sep 9 05:03:27.771023 kernel: Warning: unable to open an initial console.
Sep 9 05:03:27.771032 kernel: Freeing unused kernel memory: 38976K
Sep 9 05:03:27.771039 kernel: Run /init as init process
Sep 9 05:03:27.771046 kernel: with arguments:
Sep 9 05:03:27.771053 kernel: /init
Sep 9 05:03:27.771060 kernel: with environment:
Sep 9 05:03:27.771067 kernel: HOME=/
Sep 9 05:03:27.771074 kernel: TERM=linux
Sep 9 05:03:27.771081 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:03:27.771089 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:03:27.771100 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:03:27.771109 systemd[1]: Detected virtualization kvm.
Sep 9 05:03:27.771116 systemd[1]: Detected architecture arm64.
Sep 9 05:03:27.771124 systemd[1]: Running in initrd.
Sep 9 05:03:27.771132 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:03:27.771140 systemd[1]: Hostname set to .
Sep 9 05:03:27.771147 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:03:27.771156 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:03:27.771164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:03:27.771171 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:03:27.771179 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:03:27.771187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:03:27.771195 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:03:27.771203 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:03:27.771213 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:03:27.771220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:03:27.771228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:03:27.771235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:03:27.771243 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:03:27.771251 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:03:27.771259 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:03:27.771267 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:03:27.771276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:03:27.771284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:03:27.771291 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:03:27.771299 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:03:27.771307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:03:27.771314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:03:27.771322 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:03:27.771329 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:03:27.771337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:03:27.771346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:03:27.771353 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:03:27.771361 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:03:27.771369 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:03:27.771377 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:03:27.771384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:03:27.771392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:03:27.771400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:03:27.771409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:03:27.771417 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:03:27.771441 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 05:03:27.771462 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:03:27.771471 systemd-journald[244]: Journal started
Sep 9 05:03:27.771488 systemd-journald[244]: Runtime Journal (/run/log/journal/98349e9030f647f0a8356e46344a81c6) is 6M, max 48.5M, 42.4M free.
Sep 9 05:03:27.765428 systemd-modules-load[245]: Inserted module 'overlay'
Sep 9 05:03:27.773018 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:03:27.778516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:03:27.780098 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 9 05:03:27.781256 kernel: Bridge firewalling registered
Sep 9 05:03:27.781025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:03:27.782611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:03:27.787338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:03:27.789014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:03:27.790859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:03:27.792295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:03:27.806739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:03:27.814459 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:03:27.816554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:03:27.820200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:03:27.822738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:03:27.824088 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:03:27.827244 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:03:27.829401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:03:27.855638 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:03:27.870263 systemd-resolved[289]: Positive Trust Anchors:
Sep 9 05:03:27.870280 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:03:27.870312 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:03:27.876604 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 9 05:03:27.877723 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:03:27.879489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:03:27.932526 kernel: SCSI subsystem initialized
Sep 9 05:03:27.936517 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:03:27.944545 kernel: iscsi: registered transport (tcp)
Sep 9 05:03:27.957537 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:03:27.957568 kernel: QLogic iSCSI HBA Driver
Sep 9 05:03:27.974176 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:03:27.993564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:03:27.996747 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:03:28.041109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:03:28.043323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:03:28.109552 kernel: raid6: neonx8 gen() 15801 MB/s
Sep 9 05:03:28.126518 kernel: raid6: neonx4 gen() 15811 MB/s
Sep 9 05:03:28.144537 kernel: raid6: neonx2 gen() 13212 MB/s
Sep 9 05:03:28.160525 kernel: raid6: neonx1 gen() 10447 MB/s
Sep 9 05:03:28.177515 kernel: raid6: int64x8 gen() 6895 MB/s
Sep 9 05:03:28.194514 kernel: raid6: int64x4 gen() 7350 MB/s
Sep 9 05:03:28.211514 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 9 05:03:28.228516 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 9 05:03:28.228535 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
Sep 9 05:03:28.245522 kernel: raid6: .... xor() 12338 MB/s, rmw enabled
Sep 9 05:03:28.245546 kernel: raid6: using neon recovery algorithm
Sep 9 05:03:28.250739 kernel: xor: measuring software checksum speed
Sep 9 05:03:28.250760 kernel: 8regs : 19869 MB/sec
Sep 9 05:03:28.251864 kernel: 32regs : 21653 MB/sec
Sep 9 05:03:28.251880 kernel: arm64_neon : 28109 MB/sec
Sep 9 05:03:28.251889 kernel: xor: using function: arm64_neon (28109 MB/sec)
Sep 9 05:03:28.304529 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:03:28.310857 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:03:28.314255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:03:28.342522 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 9 05:03:28.346521 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:03:28.348157 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:03:28.375640 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Sep 9 05:03:28.397811 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:03:28.399889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:03:28.448403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:03:28.450940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:03:28.499548 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 05:03:28.502071 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 05:03:28.505921 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 05:03:28.505954 kernel: GPT:9289727 != 19775487
Sep 9 05:03:28.505970 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 05:03:28.505981 kernel: GPT:9289727 != 19775487
Sep 9 05:03:28.505989 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 05:03:28.507688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:03:28.517585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:03:28.507847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:03:28.519894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:03:28.523271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:03:28.543366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 05:03:28.546524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:03:28.552559 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:03:28.561473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:03:28.568940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 05:03:28.579019 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 05:03:28.579983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 05:03:28.582683 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:03:28.584517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:03:28.586249 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:03:28.588665 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:03:28.590194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:03:28.614487 disk-uuid[591]: Primary Header is updated.
Sep 9 05:03:28.614487 disk-uuid[591]: Secondary Entries is updated.
Sep 9 05:03:28.614487 disk-uuid[591]: Secondary Header is updated.
Sep 9 05:03:28.619519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:03:28.619538 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:03:29.626517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:03:29.627244 disk-uuid[594]: The operation has completed successfully.
Sep 9 05:03:29.648986 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:03:29.649075 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:03:29.676482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:03:29.688239 sh[612]: Success
Sep 9 05:03:29.700930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:03:29.700976 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:03:29.700995 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:03:29.707559 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 05:03:29.732073 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:03:29.734709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:03:29.753827 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:03:29.758524 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 9 05:03:29.758566 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364
Sep 9 05:03:29.760524 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:03:29.763993 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:03:29.764010 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:03:29.764981 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:03:29.766196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:03:29.767580 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:03:29.768356 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:03:29.770035 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:03:29.798259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 9 05:03:29.798312 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:03:29.798323 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:03:29.801795 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:03:29.801835 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:03:29.806533 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:03:29.807296 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:03:29.809215 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:03:29.895922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:03:29.898676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:03:29.914696 ignition[699]: Ignition 2.22.0
Sep 9 05:03:29.915407 ignition[699]: Stage: fetch-offline
Sep 9 05:03:29.915454 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:29.915462 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:29.915552 ignition[699]: parsed url from cmdline: ""
Sep 9 05:03:29.915555 ignition[699]: no config URL provided
Sep 9 05:03:29.915559 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:03:29.915567 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:03:29.915587 ignition[699]: op(1): [started] loading QEMU firmware config module
Sep 9 05:03:29.915591 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 05:03:29.921532 ignition[699]: op(1): [finished] loading QEMU firmware config module
Sep 9 05:03:29.938661 systemd-networkd[806]: lo: Link UP
Sep 9 05:03:29.938674 systemd-networkd[806]: lo: Gained carrier
Sep 9 05:03:29.939647 systemd-networkd[806]: Enumeration completed
Sep 9 05:03:29.939774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:03:29.940081 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:03:29.940085 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:03:29.941018 systemd-networkd[806]: eth0: Link UP
Sep 9 05:03:29.941111 systemd-networkd[806]: eth0: Gained carrier
Sep 9 05:03:29.941119 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:03:29.941711 systemd[1]: Reached target network.target - Network.
Sep 9 05:03:29.964573 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:03:29.972509 ignition[699]: parsing config with SHA512: 2d8a49bc9beffeebaca22bd7ad244fa11dcb2c4590ef52a2ec2d44d3ca3cf03a25a861e3c6474064ccd83ba7acce2bcc9862c89226ae0e61198fb1fe663ff495
Sep 9 05:03:29.978557 unknown[699]: fetched base config from "system"
Sep 9 05:03:29.979296 unknown[699]: fetched user config from "qemu"
Sep 9 05:03:29.979724 ignition[699]: fetch-offline: fetch-offline passed
Sep 9 05:03:29.979787 ignition[699]: Ignition finished successfully
Sep 9 05:03:29.981754 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:03:29.982859 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 05:03:29.983644 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 05:03:30.018662 ignition[815]: Ignition 2.22.0
Sep 9 05:03:30.018679 ignition[815]: Stage: kargs
Sep 9 05:03:30.018805 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:30.018814 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:30.021846 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 05:03:30.019556 ignition[815]: kargs: kargs passed
Sep 9 05:03:30.019598 ignition[815]: Ignition finished successfully
Sep 9 05:03:30.023912 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 05:03:30.058537 ignition[823]: Ignition 2.22.0
Sep 9 05:03:30.058554 ignition[823]: Stage: disks
Sep 9 05:03:30.058682 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:30.058690 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:30.059454 ignition[823]: disks: disks passed
Sep 9 05:03:30.061700 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 05:03:30.059514 ignition[823]: Ignition finished successfully
Sep 9 05:03:30.063259 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 05:03:30.064506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 05:03:30.066269 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:03:30.067660 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:03:30.069251 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:03:30.071846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 05:03:30.096683 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 05:03:30.103024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 05:03:30.105824 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 05:03:30.167511 kernel: EXT4-fs (vda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none.
Sep 9 05:03:30.168020 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 05:03:30.169223 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:03:30.171254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:03:30.172760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 05:03:30.173721 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 05:03:30.173763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 05:03:30.173786 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:03:30.195196 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 05:03:30.197787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 05:03:30.201768 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Sep 9 05:03:30.201790 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:03:30.201822 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:03:30.203511 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:03:30.203537 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:03:30.204745 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:03:30.232352 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 05:03:30.236576 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Sep 9 05:03:30.240626 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 05:03:30.244387 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 05:03:30.322630 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 05:03:30.324613 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 05:03:30.326168 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 05:03:30.356537 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:03:30.368241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 05:03:30.384027 ignition[956]: INFO : Ignition 2.22.0
Sep 9 05:03:30.384027 ignition[956]: INFO : Stage: mount
Sep 9 05:03:30.385363 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:30.385363 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:30.385363 ignition[956]: INFO : mount: mount passed
Sep 9 05:03:30.385363 ignition[956]: INFO : Ignition finished successfully
Sep 9 05:03:30.387233 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 05:03:30.389630 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 05:03:30.758618 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 05:03:30.760087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:03:30.789525 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969)
Sep 9 05:03:30.791292 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:03:30.791310 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:03:30.793718 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:03:30.793763 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:03:30.795146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:03:30.829202 ignition[986]: INFO : Ignition 2.22.0
Sep 9 05:03:30.829202 ignition[986]: INFO : Stage: files
Sep 9 05:03:30.830636 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:30.830636 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:30.830636 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 05:03:30.833296 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 05:03:30.833296 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 05:03:30.836450 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 05:03:30.837787 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 05:03:30.839071 unknown[986]: wrote ssh authorized keys file for user: core
Sep 9 05:03:30.840057 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 05:03:30.842097 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 05:03:30.842097 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 05:03:30.869598 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 05:03:31.306039 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 05:03:31.306039 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 05:03:31.309686 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 05:03:31.511112 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 05:03:31.512342 systemd-networkd[806]: eth0: Gained IPv6LL
Sep 9 05:03:31.617254 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 05:03:31.617254 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:03:31.620066 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 05:03:31.631130 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 05:03:31.891982 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 05:03:32.224754 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 05:03:32.224754 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 05:03:32.227980 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:03:32.230254 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:03:32.230254 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 05:03:32.230254 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 05:03:32.234140 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:03:32.234140 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:03:32.234140 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 05:03:32.234140 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:03:32.247449 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:03:32.250720 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:03:32.253537 ignition[986]: INFO : files: files passed
Sep 9 05:03:32.253537 ignition[986]: INFO : Ignition finished successfully
Sep 9 05:03:32.255135 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 05:03:32.257289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 05:03:32.260112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:03:32.273618 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:03:32.273748 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:03:32.278840 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 05:03:32.280360 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:03:32.280360 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:03:32.284075 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:03:32.282794 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:03:32.285478 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:03:32.288652 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:03:32.341453 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:03:32.342580 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:03:32.343801 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:03:32.346607 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:03:32.348342 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:03:32.349151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:03:32.379812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:03:32.381691 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:03:32.408234 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:03:32.409359 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:03:32.411353 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:03:32.412882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:03:32.413005 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:03:32.415090 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:03:32.416752 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:03:32.418121 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:03:32.419549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:03:32.421294 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:03:32.423001 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:03:32.424565 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:03:32.426192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:03:32.427882 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:03:32.429447 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:03:32.430933 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:03:32.432199 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:03:32.432309 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:03:32.434260 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:03:32.436069 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:03:32.437678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:03:32.439164 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:03:32.440290 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:03:32.440393 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:03:32.442785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:03:32.442894 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:03:32.444567 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:03:32.445934 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:03:32.446658 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:03:32.448294 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:03:32.449742 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:03:32.451390 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:03:32.451470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:03:32.452752 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:03:32.452828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:03:32.454319 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:03:32.454429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:03:32.456018 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:03:32.456113 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:03:32.458264 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:03:32.460050 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:03:32.462280 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:03:32.462406 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:03:32.464302 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:03:32.464391 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:03:32.469014 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:03:32.476669 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:03:32.484892 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:03:32.505551 ignition[1042]: INFO : Ignition 2.22.0
Sep 9 05:03:32.505551 ignition[1042]: INFO : Stage: umount
Sep 9 05:03:32.508007 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:03:32.508007 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:03:32.508007 ignition[1042]: INFO : umount: umount passed
Sep 9 05:03:32.508007 ignition[1042]: INFO : Ignition finished successfully
Sep 9 05:03:32.508959 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:03:32.509052 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:03:32.510634 systemd[1]: Stopped target network.target - Network.
Sep 9 05:03:32.511741 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:03:32.511794 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:03:32.513239 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:03:32.513280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:03:32.514551 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:03:32.514592 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:03:32.515999 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:03:32.516034 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:03:32.517465 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:03:32.518866 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:03:32.525247 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:03:32.525350 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:03:32.528892 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:03:32.529099 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:03:32.529192 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:03:32.531952 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:03:32.532444 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:03:32.533978 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:03:32.534014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:03:32.536228 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:03:32.537611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:03:32.537662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:03:32.539264 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:03:32.539305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:03:32.541645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:03:32.541683 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:03:32.543307 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:03:32.543347 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:03:32.545791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:03:32.550264 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:03:32.550325 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:03:32.559190 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:03:32.562763 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:03:32.565149 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:03:32.565250 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:03:32.566782 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:03:32.566848 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:03:32.568661 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:03:32.568722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:03:32.570346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:03:32.570375 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:03:32.571926 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:03:32.571979 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:03:32.574020 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:03:32.574060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:03:32.576168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:03:32.576220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:03:32.578474 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:03:32.578532 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:03:32.580851 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:03:32.582851 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:03:32.582916 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:03:32.585329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:03:32.585369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:03:32.587878 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 05:03:32.587915 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:03:32.590464 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:03:32.590513 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:03:32.592486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:03:32.592539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:03:32.596289 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:03:32.596333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 05:03:32.596363 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:03:32.596391 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:03:32.596725 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:03:32.596819 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:03:32.598316 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:03:32.600123 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:03:32.626736 systemd[1]: Switching root.
Sep 9 05:03:32.662517 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:03:32.662562 systemd-journald[244]: Journal stopped
Sep 9 05:03:33.440070 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:03:33.440116 kernel: SELinux: policy capability open_perms=1
Sep 9 05:03:33.440129 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:03:33.440138 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:03:33.440150 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:03:33.440161 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:03:33.440170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:03:33.440182 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:03:33.440191 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:03:33.440201 systemd[1]: Successfully loaded SELinux policy in 42.519ms.
Sep 9 05:03:33.440216 kernel: audit: type=1403 audit(1757394212.835:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:03:33.440230 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.156ms.
Sep 9 05:03:33.440241 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:03:33.440253 systemd[1]: Detected virtualization kvm.
Sep 9 05:03:33.440263 systemd[1]: Detected architecture arm64.
Sep 9 05:03:33.440272 systemd[1]: Detected first boot.
Sep 9 05:03:33.440282 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:03:33.440291 zram_generator::config[1089]: No configuration found.
Sep 9 05:03:33.440301 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:03:33.440314 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:03:33.440325 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:03:33.440334 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:03:33.440344 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:03:33.440353 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:03:33.440363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:03:33.440373 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:03:33.440383 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:03:33.440393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:03:33.440404 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:03:33.440414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:03:33.440424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:03:33.440434 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:03:33.440443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:03:33.440453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:03:33.440464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:03:33.440474 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:03:33.440485 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:03:33.440525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:03:33.440538 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 05:03:33.440548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:03:33.440558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:03:33.440568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:03:33.440578 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:03:33.440588 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:03:33.440600 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:03:33.440609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:03:33.440622 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:03:33.440632 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:03:33.440642 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:03:33.440652 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:03:33.440662 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:03:33.440672 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:03:33.440682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:03:33.440693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:03:33.440703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:03:33.440713 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:03:33.440723 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:03:33.440732 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:03:33.440742 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:03:33.440751 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:03:33.440761 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:03:33.440770 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:03:33.440782 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:03:33.440792 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:03:33.440803 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:03:33.440813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:03:33.440822 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:03:33.440833 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:03:33.440842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:03:33.440852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:03:33.440861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:03:33.440872 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:03:33.440882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:03:33.440892 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:03:33.440902 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:03:33.440911 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:03:33.440921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:03:33.440931 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:03:33.440949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:03:33.440963 kernel: fuse: init (API version 7.41)
Sep 9 05:03:33.440973 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:03:33.440982 kernel: loop: module loaded
Sep 9 05:03:33.440992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:03:33.441002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:03:33.441012 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:03:33.441021 kernel: ACPI: bus type drm_connector registered
Sep 9 05:03:33.441030 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:03:33.441040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:03:33.441051 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:03:33.441061 systemd[1]: Stopped verity-setup.service.
Sep 9 05:03:33.441073 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:03:33.441083 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:03:33.441093 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:03:33.441105 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:03:33.441135 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 9 05:03:33.441160 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:03:33.441172 systemd-journald[1158]: Journal started
Sep 9 05:03:33.441191 systemd-journald[1158]: Runtime Journal (/run/log/journal/98349e9030f647f0a8356e46344a81c6) is 6M, max 48.5M, 42.4M free.
Sep 9 05:03:33.209532 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:03:33.441561 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:03:33.232673 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 05:03:33.233073 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:03:33.443626 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:03:33.445558 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:03:33.446683 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:03:33.447977 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:03:33.448140 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:03:33.449287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:03:33.449439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:03:33.450616 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:03:33.450767 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:03:33.451772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:03:33.451946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:03:33.453166 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:03:33.453309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:03:33.454437 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:03:33.454621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:03:33.455742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:03:33.456837 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:03:33.458093 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:03:33.459406 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:03:33.471360 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:03:33.473582 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:03:33.475408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:03:33.476454 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:03:33.476480 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:03:33.478115 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:03:33.485312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:03:33.486310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:03:33.487676 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:03:33.489425 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:03:33.490698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:03:33.492906 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:03:33.493950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:03:33.494931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:03:33.500473 systemd-journald[1158]: Time spent on flushing to /var/log/journal/98349e9030f647f0a8356e46344a81c6 is 22.551ms for 890 entries.
Sep 9 05:03:33.500473 systemd-journald[1158]: System Journal (/var/log/journal/98349e9030f647f0a8356e46344a81c6) is 8M, max 195.6M, 187.6M free.
Sep 9 05:03:33.546275 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 9 05:03:33.546324 kernel: loop0: detected capacity change from 0 to 207008
Sep 9 05:03:33.546346 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:03:33.546360 kernel: loop1: detected capacity change from 0 to 100632
Sep 9 05:03:33.498675 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:03:33.501873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:03:33.505580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:03:33.506775 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:03:33.507791 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:03:33.513539 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:03:33.518643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:03:33.522611 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:03:33.526601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:03:33.547003 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Sep 9 05:03:33.547013 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Sep 9 05:03:33.548996 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:03:33.553062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:03:33.554765 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:03:33.560207 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:03:33.568527 kernel: loop2: detected capacity change from 0 to 119368
Sep 9 05:03:33.585972 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:03:33.591191 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 05:03:33.589619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:03:33.594533 kernel: loop4: detected capacity change from 0 to 100632
Sep 9 05:03:33.603531 kernel: loop5: detected capacity change from 0 to 119368
Sep 9 05:03:33.608797 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 9 05:03:33.609050 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 9 05:03:33.612132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:03:33.617624 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 05:03:33.618007 (sd-merge)[1228]: Merged extensions into '/usr'.
Sep 9 05:03:33.623038 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:03:33.623058 systemd[1]: Reloading...
Sep 9 05:03:33.665523 zram_generator::config[1258]: No configuration found.
Sep 9 05:03:33.768562 ldconfig[1201]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:03:33.811910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:03:33.812491 systemd[1]: Reloading finished in 189 ms.
Sep 9 05:03:33.829992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:03:33.831419 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:03:33.842652 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:03:33.844327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:03:33.852973 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:03:33.852987 systemd[1]: Reloading...
Sep 9 05:03:33.856824 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:03:33.856860 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:03:33.857101 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:03:33.857282 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:03:33.857889 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:03:33.858107 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Sep 9 05:03:33.858151 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Sep 9 05:03:33.860768 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:03:33.860782 systemd-tmpfiles[1293]: Skipping /boot
Sep 9 05:03:33.866847 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:03:33.866865 systemd-tmpfiles[1293]: Skipping /boot
Sep 9 05:03:33.903540 zram_generator::config[1320]: No configuration found.
Sep 9 05:03:34.024363 systemd[1]: Reloading finished in 171 ms.
Sep 9 05:03:34.047315 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:03:34.053014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:03:34.070595 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:03:34.072795 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:03:34.074997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:03:34.079650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:03:34.081550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:03:34.085271 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:03:34.091905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:03:34.093570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:03:34.095699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:03:34.098173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:03:34.099618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:03:34.099730 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:03:34.100724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:03:34.100900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:03:34.104394 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:03:34.110923 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:03:34.111664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:03:34.115014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:03:34.116683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:03:34.121751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:03:34.122757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:03:34.122911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:03:34.124886 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:03:34.127089 systemd-udevd[1367]: Using default interface naming scheme 'v255'.
Sep 9 05:03:34.128630 augenrules[1391]: No rules
Sep 9 05:03:34.131179 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:03:34.133981 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:03:34.134322 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:03:34.135925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:03:34.137820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:03:34.138078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:03:34.139873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:03:34.140106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:03:34.141745 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:03:34.141985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:03:34.145633 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:03:34.147096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:03:34.153130 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:03:34.168005 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:03:34.181671 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:03:34.182463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:03:34.184714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:03:34.207062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:03:34.210777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:03:34.220282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:03:34.221279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:03:34.221318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:03:34.223271 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:03:34.229626 augenrules[1432]: /sbin/augenrules: No change
Sep 9 05:03:34.226718 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 05:03:34.227547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:03:34.227835 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:03:34.229461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:03:34.230116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:03:34.231948 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:03:34.233556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:03:34.236382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:03:34.237608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:03:34.238937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:03:34.239096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:03:34.247942 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 05:03:34.252696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:03:34.252760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:03:34.258742 augenrules[1471]: No rules
Sep 9 05:03:34.260395 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:03:34.261130 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:03:34.285330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:03:34.288193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:03:34.313182 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:03:34.336217 systemd-networkd[1453]: lo: Link UP
Sep 9 05:03:34.336229 systemd-networkd[1453]: lo: Gained carrier
Sep 9 05:03:34.337047 systemd-networkd[1453]: Enumeration completed
Sep 9 05:03:34.337173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:03:34.337514 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:03:34.337519 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:03:34.339629 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:03:34.340841 systemd-networkd[1453]: eth0: Link UP
Sep 9 05:03:34.340964 systemd-networkd[1453]: eth0: Gained carrier
Sep 9 05:03:34.340983 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:03:34.341837 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:03:34.345460 systemd-resolved[1361]: Positive Trust Anchors:
Sep 9 05:03:34.345479 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:03:34.346075 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:03:34.348518 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 05:03:34.349837 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:03:34.356050 systemd-resolved[1361]: Defaulting to hostname 'linux'.
Sep 9 05:03:34.356549 systemd-networkd[1453]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:03:34.357133 systemd-timesyncd[1456]: Network configuration changed, trying to establish connection.
Sep 9 05:03:34.358039 systemd-timesyncd[1456]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 05:03:34.358091 systemd-timesyncd[1456]: Initial clock synchronization to Tue 2025-09-09 05:03:34.705442 UTC.
Sep 9 05:03:34.358543 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:03:34.359533 systemd[1]: Reached target network.target - Network.
Sep 9 05:03:34.360557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:03:34.361440 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:03:34.362396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:03:34.363418 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:03:34.365141 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:03:34.366075 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:03:34.367122 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:03:34.368128 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:03:34.368155 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:03:34.368870 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:03:34.370603 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:03:34.372720 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:03:34.375342 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:03:34.376947 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:03:34.377963 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:03:34.381904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:03:34.384172 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:03:34.387128 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:03:34.388725 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:03:34.391329 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:03:34.394807 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:03:34.395741 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 9 05:03:34.395777 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:03:34.399596 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:03:34.401650 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:03:34.404719 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:03:34.419408 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:03:34.421361 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:03:34.422203 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:03:34.423134 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:03:34.424806 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:03:34.426219 jq[1506]: false Sep 9 05:03:34.428444 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:03:34.430726 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:03:34.436780 extend-filesystems[1507]: Found /dev/vda6 Sep 9 05:03:34.438295 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:03:34.440133 extend-filesystems[1507]: Found /dev/vda9 Sep 9 05:03:34.441419 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:03:34.441840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:03:34.442364 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 9 05:03:34.444238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:03:34.444908 extend-filesystems[1507]: Checking size of /dev/vda9 Sep 9 05:03:34.449536 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:03:34.451067 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:03:34.451239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:03:34.451478 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:03:34.451697 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:03:34.454606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:03:34.454782 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:03:34.458315 jq[1525]: true Sep 9 05:03:34.465544 extend-filesystems[1507]: Resized partition /dev/vda9 Sep 9 05:03:34.469884 extend-filesystems[1542]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:03:34.478747 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 05:03:34.478688 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:03:34.480345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:03:34.483429 update_engine[1523]: I20250909 05:03:34.483201 1523 main.cc:92] Flatcar Update Engine starting Sep 9 05:03:34.492186 jq[1536]: true Sep 9 05:03:34.503173 tar[1533]: linux-arm64/LICENSE Sep 9 05:03:34.503173 tar[1533]: linux-arm64/helm Sep 9 05:03:34.506539 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 05:03:34.509053 dbus-daemon[1504]: [system] SELinux support is enabled Sep 9 05:03:34.509209 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 9 05:03:34.517708 update_engine[1523]: I20250909 05:03:34.511410 1523 update_check_scheduler.cc:74] Next update check in 9m58s Sep 9 05:03:34.512090 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:03:34.512109 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:03:34.514717 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:03:34.514733 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:03:34.517949 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:03:34.520759 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:03:34.521408 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:03:34.521408 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:03:34.521408 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 05:03:34.525846 extend-filesystems[1507]: Resized filesystem in /dev/vda9 Sep 9 05:03:34.522977 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:03:34.523183 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:03:34.546211 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 05:03:34.547392 systemd-logind[1521]: New seat seat0. Sep 9 05:03:34.564404 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:03:34.566044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 05:03:34.594296 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:03:34.597218 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:03:34.600644 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:03:34.602492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 05:03:34.675063 containerd[1534]: time="2025-09-09T05:03:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:03:34.677349 containerd[1534]: time="2025-09-09T05:03:34.677309040Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686684000Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.4µs" Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686718680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686735400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686867600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686883280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686905600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 
containerd[1534]: time="2025-09-09T05:03:34.686966360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.686978360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.687164960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.687180520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.687190920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687540 containerd[1534]: time="2025-09-09T05:03:34.687199200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687772 containerd[1534]: time="2025-09-09T05:03:34.687271480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687772 containerd[1534]: time="2025-09-09T05:03:34.687442160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687772 containerd[1534]: time="2025-09-09T05:03:34.687468320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 9 05:03:34.687772 containerd[1534]: time="2025-09-09T05:03:34.687477840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:03:34.687885 containerd[1534]: time="2025-09-09T05:03:34.687864360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:03:34.688182 containerd[1534]: time="2025-09-09T05:03:34.688164040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:03:34.688314 containerd[1534]: time="2025-09-09T05:03:34.688294120Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:03:34.702342 containerd[1534]: time="2025-09-09T05:03:34.702308480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:03:34.702457 containerd[1534]: time="2025-09-09T05:03:34.702442160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:03:34.702602 containerd[1534]: time="2025-09-09T05:03:34.702585560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:03:34.702664 containerd[1534]: time="2025-09-09T05:03:34.702651200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:03:34.702714 containerd[1534]: time="2025-09-09T05:03:34.702701240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:03:34.702764 containerd[1534]: time="2025-09-09T05:03:34.702750520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:03:34.702815 containerd[1534]: time="2025-09-09T05:03:34.702803680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:03:34.702867 containerd[1534]: 
time="2025-09-09T05:03:34.702854120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:03:34.702921 containerd[1534]: time="2025-09-09T05:03:34.702908080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:03:34.703009 containerd[1534]: time="2025-09-09T05:03:34.702993560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:03:34.703059 containerd[1534]: time="2025-09-09T05:03:34.703048080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:03:34.703113 containerd[1534]: time="2025-09-09T05:03:34.703100640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:03:34.703285 containerd[1534]: time="2025-09-09T05:03:34.703263160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:03:34.703356 containerd[1534]: time="2025-09-09T05:03:34.703342000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:03:34.703411 containerd[1534]: time="2025-09-09T05:03:34.703397640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:03:34.703476 containerd[1534]: time="2025-09-09T05:03:34.703463360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:03:34.703548 containerd[1534]: time="2025-09-09T05:03:34.703535080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:03:34.703599 containerd[1534]: time="2025-09-09T05:03:34.703587240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:03:34.703651 containerd[1534]: time="2025-09-09T05:03:34.703638880Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:03:34.703722 containerd[1534]: time="2025-09-09T05:03:34.703708960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:03:34.703782 containerd[1534]: time="2025-09-09T05:03:34.703769280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:03:34.703837 containerd[1534]: time="2025-09-09T05:03:34.703826120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:03:34.703886 containerd[1534]: time="2025-09-09T05:03:34.703874520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:03:34.704128 containerd[1534]: time="2025-09-09T05:03:34.704110880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:03:34.704189 containerd[1534]: time="2025-09-09T05:03:34.704176320Z" level=info msg="Start snapshots syncer" Sep 9 05:03:34.704257 containerd[1534]: time="2025-09-09T05:03:34.704242680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:03:34.704695 containerd[1534]: time="2025-09-09T05:03:34.704655680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:03:34.704851 containerd[1534]: time="2025-09-09T05:03:34.704835000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:03:34.704999 containerd[1534]: time="2025-09-09T05:03:34.704981160Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:03:34.706508 containerd[1534]: time="2025-09-09T05:03:34.706441400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:03:34.706713 containerd[1534]: time="2025-09-09T05:03:34.706691600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:03:34.706761 containerd[1534]: time="2025-09-09T05:03:34.706717240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:03:34.706761 containerd[1534]: time="2025-09-09T05:03:34.706730400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:03:34.706761 containerd[1534]: time="2025-09-09T05:03:34.706742600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:03:34.706761 containerd[1534]: time="2025-09-09T05:03:34.706752680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:03:34.706827 containerd[1534]: time="2025-09-09T05:03:34.706762840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:03:34.706827 containerd[1534]: time="2025-09-09T05:03:34.706790640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:03:34.706827 containerd[1534]: time="2025-09-09T05:03:34.706801560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:03:34.706827 containerd[1534]: time="2025-09-09T05:03:34.706811840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:03:34.706891 containerd[1534]: time="2025-09-09T05:03:34.706843840Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:03:34.706891 containerd[1534]: time="2025-09-09T05:03:34.706860200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:03:34.706891 containerd[1534]: time="2025-09-09T05:03:34.706868880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:03:34.706891 containerd[1534]: time="2025-09-09T05:03:34.706878040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:03:34.706891 containerd[1534]: time="2025-09-09T05:03:34.706886320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:03:34.706987 containerd[1534]: time="2025-09-09T05:03:34.706895920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:03:34.706987 containerd[1534]: time="2025-09-09T05:03:34.706906360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:03:34.707527 containerd[1534]: time="2025-09-09T05:03:34.707021640Z" level=info msg="runtime interface created" Sep 9 05:03:34.707527 containerd[1534]: time="2025-09-09T05:03:34.707033040Z" level=info msg="created NRI interface" Sep 9 05:03:34.707527 containerd[1534]: time="2025-09-09T05:03:34.707043040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:03:34.707527 containerd[1534]: time="2025-09-09T05:03:34.707056440Z" level=info msg="Connect containerd service" Sep 9 05:03:34.707527 containerd[1534]: time="2025-09-09T05:03:34.707118600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:03:34.708293 containerd[1534]: 
time="2025-09-09T05:03:34.708262360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:03:34.784412 containerd[1534]: time="2025-09-09T05:03:34.784345320Z" level=info msg="Start subscribing containerd event" Sep 9 05:03:34.784587 containerd[1534]: time="2025-09-09T05:03:34.784571240Z" level=info msg="Start recovering state" Sep 9 05:03:34.784763 containerd[1534]: time="2025-09-09T05:03:34.784748040Z" level=info msg="Start event monitor" Sep 9 05:03:34.784839 containerd[1534]: time="2025-09-09T05:03:34.784827560Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:03:34.784923 containerd[1534]: time="2025-09-09T05:03:34.784828000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:03:34.785145 containerd[1534]: time="2025-09-09T05:03:34.785039120Z" level=info msg="Start streaming server" Sep 9 05:03:34.785522 containerd[1534]: time="2025-09-09T05:03:34.785388640Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:03:34.785599 containerd[1534]: time="2025-09-09T05:03:34.785583920Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:03:34.785643 containerd[1534]: time="2025-09-09T05:03:34.785632600Z" level=info msg="runtime interface starting up..." Sep 9 05:03:34.785682 containerd[1534]: time="2025-09-09T05:03:34.785672240Z" level=info msg="starting plugins..." Sep 9 05:03:34.785738 containerd[1534]: time="2025-09-09T05:03:34.785725520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:03:34.786064 containerd[1534]: time="2025-09-09T05:03:34.786043440Z" level=info msg="containerd successfully booted in 0.111317s" Sep 9 05:03:34.786184 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 9 05:03:34.831366 tar[1533]: linux-arm64/README.md Sep 9 05:03:34.850834 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:03:35.381863 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:03:35.401901 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:03:35.405287 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:03:35.429100 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:03:35.429335 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:03:35.432006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:03:35.459902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:03:35.463062 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:03:35.465170 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 05:03:35.466366 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:03:35.994217 systemd-networkd[1453]: eth0: Gained IPv6LL Sep 9 05:03:35.996621 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:03:35.998237 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:03:36.000720 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 05:03:36.023897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:03:36.026585 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:03:36.041326 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:03:36.041616 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:03:36.043376 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:03:36.046205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 9 05:03:36.651062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:03:36.652755 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:03:36.654576 systemd[1]: Startup finished in 2.021s (kernel) + 5.242s (initrd) + 3.862s (userspace) = 11.126s. Sep 9 05:03:36.657269 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:03:37.032677 kubelet[1645]: E0909 05:03:37.032539 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:03:37.034799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:03:37.034944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:03:37.035258 systemd[1]: kubelet.service: Consumed 748ms CPU time, 256.6M memory peak. Sep 9 05:03:41.144071 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:03:41.145163 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:58638.service - OpenSSH per-connection server daemon (10.0.0.1:58638). Sep 9 05:03:41.221509 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 58638 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:03:41.226415 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:03:41.232749 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:03:41.233797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:03:41.243198 systemd-logind[1521]: New session 1 of user core. 
Sep 9 05:03:41.263141 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:03:41.274512 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:03:41.299043 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:03:41.301885 systemd-logind[1521]: New session c1 of user core. Sep 9 05:03:41.428045 systemd[1664]: Queued start job for default target default.target. Sep 9 05:03:41.443582 systemd[1664]: Created slice app.slice - User Application Slice. Sep 9 05:03:41.443612 systemd[1664]: Reached target paths.target - Paths. Sep 9 05:03:41.443648 systemd[1664]: Reached target timers.target - Timers. Sep 9 05:03:41.444900 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:03:41.459442 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:03:41.459569 systemd[1664]: Reached target sockets.target - Sockets. Sep 9 05:03:41.459609 systemd[1664]: Reached target basic.target - Basic System. Sep 9 05:03:41.459636 systemd[1664]: Reached target default.target - Main User Target. Sep 9 05:03:41.459664 systemd[1664]: Startup finished in 149ms. Sep 9 05:03:41.460131 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:03:41.478790 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:03:41.543535 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:58644.service - OpenSSH per-connection server daemon (10.0.0.1:58644). Sep 9 05:03:41.619643 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 58644 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:03:41.621150 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:03:41.625940 systemd-logind[1521]: New session 2 of user core. Sep 9 05:03:41.633691 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 05:03:41.705700 sshd[1678]: Connection closed by 10.0.0.1 port 58644
Sep 9 05:03:41.706049 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 9 05:03:41.721657 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:58644.service: Deactivated successfully.
Sep 9 05:03:41.725103 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 05:03:41.726733 systemd-logind[1521]: Session 2 logged out. Waiting for processes to exit.
Sep 9 05:03:41.730285 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:58646.service - OpenSSH per-connection server daemon (10.0.0.1:58646).
Sep 9 05:03:41.731825 systemd-logind[1521]: Removed session 2.
Sep 9 05:03:41.788051 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 58646 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:03:41.790314 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:03:41.794603 systemd-logind[1521]: New session 3 of user core.
Sep 9 05:03:41.804720 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 05:03:41.854391 sshd[1687]: Connection closed by 10.0.0.1 port 58646
Sep 9 05:03:41.854707 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Sep 9 05:03:41.872431 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:58646.service: Deactivated successfully.
Sep 9 05:03:41.877083 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 05:03:41.877878 systemd-logind[1521]: Session 3 logged out. Waiting for processes to exit.
Sep 9 05:03:41.881217 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:58662.service - OpenSSH per-connection server daemon (10.0.0.1:58662).
Sep 9 05:03:41.882060 systemd-logind[1521]: Removed session 3.
Sep 9 05:03:41.946921 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 58662 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:03:41.949305 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:03:41.953980 systemd-logind[1521]: New session 4 of user core.
Sep 9 05:03:41.960744 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 05:03:42.014044 sshd[1696]: Connection closed by 10.0.0.1 port 58662
Sep 9 05:03:42.015937 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Sep 9 05:03:42.026642 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:58662.service: Deactivated successfully.
Sep 9 05:03:42.028359 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 05:03:42.030755 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit.
Sep 9 05:03:42.032498 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670).
Sep 9 05:03:42.036434 systemd-logind[1521]: Removed session 4.
Sep 9 05:03:42.095129 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:03:42.096411 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:03:42.100654 systemd-logind[1521]: New session 5 of user core.
Sep 9 05:03:42.107673 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 05:03:42.164484 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 05:03:42.164818 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:03:42.180480 sudo[1706]: pam_unix(sudo:session): session closed for user root
Sep 9 05:03:42.182067 sshd[1705]: Connection closed by 10.0.0.1 port 58670
Sep 9 05:03:42.184019 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Sep 9 05:03:42.199769 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:58670.service: Deactivated successfully.
Sep 9 05:03:42.203159 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 05:03:42.203900 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit.
Sep 9 05:03:42.206312 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:58676.service - OpenSSH per-connection server daemon (10.0.0.1:58676).
Sep 9 05:03:42.207161 systemd-logind[1521]: Removed session 5.
Sep 9 05:03:42.278472 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 58676 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:03:42.279868 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:03:42.284398 systemd-logind[1521]: New session 6 of user core.
Sep 9 05:03:42.292764 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 05:03:42.347156 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 05:03:42.347438 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:03:42.473302 sudo[1717]: pam_unix(sudo:session): session closed for user root
Sep 9 05:03:42.479937 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 05:03:42.480506 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:03:42.490183 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:03:42.534485 augenrules[1739]: No rules
Sep 9 05:03:42.536126 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:03:42.536338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:03:42.538160 sudo[1716]: pam_unix(sudo:session): session closed for user root
Sep 9 05:03:42.540084 sshd[1715]: Connection closed by 10.0.0.1 port 58676
Sep 9 05:03:42.540905 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Sep 9 05:03:42.556649 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:58676.service: Deactivated successfully.
Sep 9 05:03:42.558064 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 05:03:42.560421 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit.
Sep 9 05:03:42.562324 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:58686.service - OpenSSH per-connection server daemon (10.0.0.1:58686).
Sep 9 05:03:42.565825 systemd-logind[1521]: Removed session 6.
Sep 9 05:03:42.638065 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 58686 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:03:42.639266 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:03:42.643349 systemd-logind[1521]: New session 7 of user core.
Sep 9 05:03:42.658644 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 05:03:42.710995 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 05:03:42.711274 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:03:42.999502 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 05:03:43.012876 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 05:03:43.252317 dockerd[1773]: time="2025-09-09T05:03:43.252184247Z" level=info msg="Starting up"
Sep 9 05:03:43.253060 dockerd[1773]: time="2025-09-09T05:03:43.253040153Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 05:03:43.263428 dockerd[1773]: time="2025-09-09T05:03:43.263393762Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 05:03:43.313144 dockerd[1773]: time="2025-09-09T05:03:43.313099562Z" level=info msg="Loading containers: start."
Sep 9 05:03:43.325542 kernel: Initializing XFRM netlink socket
Sep 9 05:03:43.514022 systemd-networkd[1453]: docker0: Link UP
Sep 9 05:03:43.517631 dockerd[1773]: time="2025-09-09T05:03:43.517488038Z" level=info msg="Loading containers: done."
Sep 9 05:03:43.533856 dockerd[1773]: time="2025-09-09T05:03:43.533802825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 05:03:43.533989 dockerd[1773]: time="2025-09-09T05:03:43.533893803Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 05:03:43.534014 dockerd[1773]: time="2025-09-09T05:03:43.533985634Z" level=info msg="Initializing buildkit"
Sep 9 05:03:43.555168 dockerd[1773]: time="2025-09-09T05:03:43.555133816Z" level=info msg="Completed buildkit initialization"
Sep 9 05:03:43.559921 dockerd[1773]: time="2025-09-09T05:03:43.559887762Z" level=info msg="Daemon has completed initialization"
Sep 9 05:03:43.560068 dockerd[1773]: time="2025-09-09T05:03:43.559950282Z" level=info msg="API listen on /run/docker.sock"
Sep 9 05:03:43.560174 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 05:03:44.206733 containerd[1534]: time="2025-09-09T05:03:44.206688532Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 05:03:44.786154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145448905.mount: Deactivated successfully.
Sep 9 05:03:46.188608 containerd[1534]: time="2025-09-09T05:03:46.188489885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:46.189703 containerd[1534]: time="2025-09-09T05:03:46.189652756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 05:03:46.192075 containerd[1534]: time="2025-09-09T05:03:46.192029242Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:46.195630 containerd[1534]: time="2025-09-09T05:03:46.195586132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:46.197319 containerd[1534]: time="2025-09-09T05:03:46.197284222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.990549565s"
Sep 9 05:03:46.197370 containerd[1534]: time="2025-09-09T05:03:46.197325026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 05:03:46.199410 containerd[1534]: time="2025-09-09T05:03:46.199369787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 05:03:47.285304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:03:47.286865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:03:47.448607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:03:47.463859 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:03:47.563165 containerd[1534]: time="2025-09-09T05:03:47.563008957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:47.564592 containerd[1534]: time="2025-09-09T05:03:47.564552665Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 05:03:47.565308 containerd[1534]: time="2025-09-09T05:03:47.565269885Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:47.568096 containerd[1534]: time="2025-09-09T05:03:47.568046162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:47.569159 containerd[1534]: time="2025-09-09T05:03:47.569128711Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.369717931s"
Sep 9 05:03:47.569201 containerd[1534]: time="2025-09-09T05:03:47.569166115Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 05:03:47.571224 containerd[1534]: time="2025-09-09T05:03:47.571184136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 05:03:47.571461 kubelet[2054]: E0909 05:03:47.571420 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:03:47.574716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:03:47.574854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:03:47.575434 systemd[1]: kubelet.service: Consumed 156ms CPU time, 106M memory peak.
Sep 9 05:03:49.122210 containerd[1534]: time="2025-09-09T05:03:49.122151982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:49.122697 containerd[1534]: time="2025-09-09T05:03:49.122664673Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 05:03:49.123681 containerd[1534]: time="2025-09-09T05:03:49.123650552Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:49.125871 containerd[1534]: time="2025-09-09T05:03:49.125842375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:49.126995 containerd[1534]: time="2025-09-09T05:03:49.126870697Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.555640306s"
Sep 9 05:03:49.126995 containerd[1534]: time="2025-09-09T05:03:49.126906536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 05:03:49.127485 containerd[1534]: time="2025-09-09T05:03:49.127445924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 05:03:50.075712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69574174.mount: Deactivated successfully.
Sep 9 05:03:50.321612 containerd[1534]: time="2025-09-09T05:03:50.321557805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:50.322180 containerd[1534]: time="2025-09-09T05:03:50.321949568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 05:03:50.322932 containerd[1534]: time="2025-09-09T05:03:50.322899825Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:50.324770 containerd[1534]: time="2025-09-09T05:03:50.324708081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:50.325356 containerd[1534]: time="2025-09-09T05:03:50.325235675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.19762768s"
Sep 9 05:03:50.325356 containerd[1534]: time="2025-09-09T05:03:50.325266695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 05:03:50.325878 containerd[1534]: time="2025-09-09T05:03:50.325786565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 05:03:51.431230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886737301.mount: Deactivated successfully.
Sep 9 05:03:52.142292 containerd[1534]: time="2025-09-09T05:03:52.142244815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:52.143120 containerd[1534]: time="2025-09-09T05:03:52.142753929Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 05:03:52.143952 containerd[1534]: time="2025-09-09T05:03:52.143915719Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:52.146761 containerd[1534]: time="2025-09-09T05:03:52.146720689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:52.148851 containerd[1534]: time="2025-09-09T05:03:52.148805441Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.822982961s"
Sep 9 05:03:52.148851 containerd[1534]: time="2025-09-09T05:03:52.148849558Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 05:03:52.149307 containerd[1534]: time="2025-09-09T05:03:52.149278876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 05:03:52.557817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116714255.mount: Deactivated successfully.
Sep 9 05:03:52.563045 containerd[1534]: time="2025-09-09T05:03:52.562990924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:03:52.564401 containerd[1534]: time="2025-09-09T05:03:52.564359317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 05:03:52.565373 containerd[1534]: time="2025-09-09T05:03:52.565335035Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:03:52.568291 containerd[1534]: time="2025-09-09T05:03:52.568249374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:03:52.569644 containerd[1534]: time="2025-09-09T05:03:52.569606958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 420.211642ms"
Sep 9 05:03:52.569684 containerd[1534]: time="2025-09-09T05:03:52.569643200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 05:03:52.570151 containerd[1534]: time="2025-09-09T05:03:52.570117157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 05:03:53.041469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884591029.mount: Deactivated successfully.
Sep 9 05:03:55.063810 containerd[1534]: time="2025-09-09T05:03:55.063753584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:55.065583 containerd[1534]: time="2025-09-09T05:03:55.065551612Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 05:03:55.067518 containerd[1534]: time="2025-09-09T05:03:55.066865109Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:55.069483 containerd[1534]: time="2025-09-09T05:03:55.069446405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:03:55.070803 containerd[1534]: time="2025-09-09T05:03:55.070589992Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.500441588s"
Sep 9 05:03:55.070803 containerd[1534]: time="2025-09-09T05:03:55.070622369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 05:03:57.604732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 05:03:57.606262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:03:57.720780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:03:57.724089 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:03:57.756655 kubelet[2215]: E0909 05:03:57.756588 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:03:57.759061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:03:57.759197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:03:57.759736 systemd[1]: kubelet.service: Consumed 129ms CPU time, 106.6M memory peak.
Sep 9 05:03:59.151680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:03:59.151822 systemd[1]: kubelet.service: Consumed 129ms CPU time, 106.6M memory peak.
Sep 9 05:03:59.154166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:03:59.179168 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)...
Sep 9 05:03:59.179185 systemd[1]: Reloading...
Sep 9 05:03:59.254539 zram_generator::config[2275]: No configuration found.
Sep 9 05:03:59.511145 systemd[1]: Reloading finished in 331 ms.
Sep 9 05:03:59.575095 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 05:03:59.575183 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 05:03:59.575477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:03:59.575539 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95M memory peak.
Sep 9 05:03:59.577176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:03:59.691837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:03:59.708938 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:03:59.745285 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:03:59.745285 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:03:59.745285 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:03:59.745643 kubelet[2319]: I0909 05:03:59.745356 2319 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:04:01.166237 kubelet[2319]: I0909 05:04:01.166186 2319 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 05:04:01.166237 kubelet[2319]: I0909 05:04:01.166221 2319 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:04:01.166647 kubelet[2319]: I0909 05:04:01.166491 2319 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 05:04:01.189647 kubelet[2319]: E0909 05:04:01.189584 2319 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:04:01.191930 kubelet[2319]: I0909 05:04:01.191900 2319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:04:01.197401 kubelet[2319]: I0909 05:04:01.197378 2319 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:04:01.201437 kubelet[2319]: I0909 05:04:01.201407 2319 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:04:01.202073 kubelet[2319]: I0909 05:04:01.202012 2319 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:04:01.202252 kubelet[2319]: I0909 05:04:01.202055 2319 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:04:01.202354 kubelet[2319]: I0909 05:04:01.202313 2319 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:04:01.202354 kubelet[2319]: I0909 05:04:01.202324 2319 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 05:04:01.202565 kubelet[2319]: I0909 05:04:01.202541 2319 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:04:01.204854 kubelet[2319]: I0909 05:04:01.204834 2319 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 05:04:01.204900 kubelet[2319]: I0909 05:04:01.204862 2319 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:04:01.204900 kubelet[2319]: I0909 05:04:01.204886 2319 kubelet.go:352] "Adding apiserver pod source"
Sep 9 05:04:01.204900 kubelet[2319]: I0909 05:04:01.204896 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:04:01.207216 kubelet[2319]: I0909 05:04:01.207197 2319 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:04:01.207921 kubelet[2319]: W0909 05:04:01.207874 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Sep 9 05:04:01.208035 kubelet[2319]: E0909 05:04:01.208015 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:04:01.208145 kubelet[2319]: I0909 05:04:01.208118 2319 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:04:01.208269 kubelet[2319]: W0909 05:04:01.208256 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 05:04:01.208724 kubelet[2319]: W0909 05:04:01.208692 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Sep 9 05:04:01.208962 kubelet[2319]: E0909 05:04:01.208937 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:04:01.209134 kubelet[2319]: I0909 05:04:01.209061 2319 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 05:04:01.209134 kubelet[2319]: I0909 05:04:01.209101 2319 server.go:1287] "Started kubelet"
Sep 9 05:04:01.209256 kubelet[2319]: I0909 05:04:01.209230 2319 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:04:01.212268 kubelet[2319]: E0909 05:04:01.212017 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186384c055957a6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:04:01.209072238 +0000 UTC m=+1.496990910,LastTimestamp:2025-09-09 05:04:01.209072238 +0000 UTC m=+1.496990910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 05:04:01.213937 kubelet[2319]: I0909 05:04:01.213913 2319 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 05:04:01.214228 kubelet[2319]: I0909 05:04:01.214199 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:04:01.215923 kubelet[2319]: I0909 05:04:01.215894 2319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:04:01.216525 kubelet[2319]: I0909 05:04:01.216446 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:04:01.216800 kubelet[2319]: I0909 05:04:01.216781 2319 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:04:01.218133 kubelet[2319]: I0909 05:04:01.218112 2319 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 05:04:01.219180 kubelet[2319]: I0909 05:04:01.218881 2319 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:04:01.219330 kubelet[2319]: E0909 05:04:01.219296 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:04:01.219411 kubelet[2319]: I0909 05:04:01.219381 2319 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 05:04:01.219578 kubelet[2319]: I0909 05:04:01.219555 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:04:01.219835 kubelet[2319]: I0909 05:04:01.219820 2319 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:04:01.220214 kubelet[2319]: W0909 05:04:01.220163 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Sep 9 05:04:01.220255 kubelet[2319]: E0909 05:04:01.220228 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:04:01.221829 kubelet[2319]: E0909 05:04:01.221025 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms"
Sep 9 05:04:01.221829 kubelet[2319]: E0909 05:04:01.221394 2319 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:04:01.221829 kubelet[2319]: I0909 05:04:01.221511 2319 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:04:01.228298 kubelet[2319]: I0909 05:04:01.228255 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:04:01.229424 kubelet[2319]: I0909 05:04:01.229372 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:04:01.229424 kubelet[2319]: I0909 05:04:01.229397 2319 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 05:04:01.229424 kubelet[2319]: I0909 05:04:01.229419 2319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 05:04:01.229424 kubelet[2319]: I0909 05:04:01.229426 2319 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 05:04:01.229564 kubelet[2319]: E0909 05:04:01.229467 2319 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:04:01.235166 kubelet[2319]: W0909 05:04:01.235134 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Sep 9 05:04:01.235290 kubelet[2319]: E0909 05:04:01.235269 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:04:01.236033 kubelet[2319]: I0909 05:04:01.236018 2319 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 05:04:01.236147 kubelet[2319]: I0909 05:04:01.236137 2319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 05:04:01.236216 kubelet[2319]: I0909 05:04:01.236207 2319 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:04:01.238211 kubelet[2319]: I0909 05:04:01.238184 2319 policy_none.go:49] "None policy: Start"
Sep 9 05:04:01.238302 kubelet[2319]: I0909 05:04:01.238292 2319 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 05:04:01.238353 kubelet[2319]: I0909 05:04:01.238345 2319 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:04:01.242861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 05:04:01.253029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 05:04:01.255677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 05:04:01.264244 kubelet[2319]: I0909 05:04:01.264191 2319 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:04:01.264639 kubelet[2319]: I0909 05:04:01.264410 2319 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:04:01.264639 kubelet[2319]: I0909 05:04:01.264431 2319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:04:01.265338 kubelet[2319]: I0909 05:04:01.264706 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:04:01.265547 kubelet[2319]: E0909 05:04:01.265480 2319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 05:04:01.265586 kubelet[2319]: E0909 05:04:01.265569 2319 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 05:04:01.337259 systemd[1]: Created slice kubepods-burstable-pod9619f5755ab1cded03a4068f9036c7cd.slice - libcontainer container kubepods-burstable-pod9619f5755ab1cded03a4068f9036c7cd.slice.
Sep 9 05:04:01.351382 kubelet[2319]: E0909 05:04:01.351284 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:01.354409 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 9 05:04:01.356400 kubelet[2319]: E0909 05:04:01.356371 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:01.357654 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 9 05:04:01.359233 kubelet[2319]: E0909 05:04:01.359210 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:01.366268 kubelet[2319]: I0909 05:04:01.366215 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 05:04:01.366666 kubelet[2319]: E0909 05:04:01.366640 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Sep 9 05:04:01.421159 kubelet[2319]: I0909 05:04:01.421046 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:01.421159 kubelet[2319]: I0909 05:04:01.421091 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:01.421159 kubelet[2319]: I0909 05:04:01.421109 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:01.421159 kubelet[2319]: I0909 05:04:01.421131 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:01.421159 kubelet[2319]: I0909 05:04:01.421159 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:01.421312 kubelet[2319]: I0909 05:04:01.421173 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:01.421312 kubelet[2319]: I0909 05:04:01.421188 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:01.421312 kubelet[2319]: I0909 05:04:01.421202 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:01.421312 kubelet[2319]: I0909 05:04:01.421224 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:01.422795 kubelet[2319]: E0909 05:04:01.422654 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms"
Sep 9 05:04:01.568630 kubelet[2319]: I0909 05:04:01.568592 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 05:04:01.569110 kubelet[2319]: E0909 05:04:01.569078 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Sep 9 05:04:01.651921 kubelet[2319]: E0909 05:04:01.651895 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.652604 containerd[1534]: time="2025-09-09T05:04:01.652571233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9619f5755ab1cded03a4068f9036c7cd,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:01.657761 kubelet[2319]: E0909 05:04:01.657728 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.658455 containerd[1534]: time="2025-09-09T05:04:01.658421635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:01.659731 kubelet[2319]: E0909 05:04:01.659652 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.660158 containerd[1534]: time="2025-09-09T05:04:01.660126692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:01.681592 containerd[1534]: time="2025-09-09T05:04:01.681462197Z" level=info msg="connecting to shim 6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4" address="unix:///run/containerd/s/c4644b3a1fdec0a501cdc29813d8c17d24f4464958f5ffc275aa562c8e43573e" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:01.700942 containerd[1534]: time="2025-09-09T05:04:01.700891493Z" level=info msg="connecting to shim 30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a" address="unix:///run/containerd/s/1cb5b2667e4451c4628c426689f347d4d75228ff951531fcd27a4739b66c267c" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:01.702849 containerd[1534]: time="2025-09-09T05:04:01.702797742Z" level=info msg="connecting to shim c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710" address="unix:///run/containerd/s/35a955118e747f45230f9722f58a57152d6ec2dad697effe152d820757519f07" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:01.718680 systemd[1]: Started cri-containerd-6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4.scope - libcontainer container 6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4.
Sep 9 05:04:01.728558 systemd[1]: Started cri-containerd-30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a.scope - libcontainer container 30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a.
Sep 9 05:04:01.730225 systemd[1]: Started cri-containerd-c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710.scope - libcontainer container c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710.
Sep 9 05:04:01.769599 containerd[1534]: time="2025-09-09T05:04:01.769481062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9619f5755ab1cded03a4068f9036c7cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4\""
Sep 9 05:04:01.770852 containerd[1534]: time="2025-09-09T05:04:01.770799438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a\""
Sep 9 05:04:01.771230 kubelet[2319]: E0909 05:04:01.771206 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.771433 kubelet[2319]: E0909 05:04:01.771403 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.774688 containerd[1534]: time="2025-09-09T05:04:01.774619024Z" level=info msg="CreateContainer within sandbox \"30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 05:04:01.774834 containerd[1534]: time="2025-09-09T05:04:01.774808239Z" level=info msg="CreateContainer within sandbox \"6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 05:04:01.781934 containerd[1534]: time="2025-09-09T05:04:01.781882851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710\""
Sep 9 05:04:01.782684 kubelet[2319]: E0909 05:04:01.782659 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:01.784673 containerd[1534]: time="2025-09-09T05:04:01.784626988Z" level=info msg="CreateContainer within sandbox \"c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 05:04:01.786290 containerd[1534]: time="2025-09-09T05:04:01.786233192Z" level=info msg="Container ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:01.786358 containerd[1534]: time="2025-09-09T05:04:01.786304127Z" level=info msg="Container b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:01.793432 containerd[1534]: time="2025-09-09T05:04:01.793394400Z" level=info msg="CreateContainer within sandbox \"6164566fec8cbade5ee2c21a1066d4f6697444b9550c6b774b361c53ae4713c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97\""
Sep 9 05:04:01.794643 containerd[1534]: time="2025-09-09T05:04:01.794166000Z" level=info msg="StartContainer for \"b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97\""
Sep 9 05:04:01.795342 containerd[1534]: time="2025-09-09T05:04:01.795306376Z" level=info msg="connecting to shim b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97" address="unix:///run/containerd/s/c4644b3a1fdec0a501cdc29813d8c17d24f4464958f5ffc275aa562c8e43573e" protocol=ttrpc version=3
Sep 9 05:04:01.795750 containerd[1534]: time="2025-09-09T05:04:01.795723738Z" level=info msg="Container 2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:01.797542 containerd[1534]: time="2025-09-09T05:04:01.797487955Z" level=info msg="CreateContainer within sandbox \"30d399e548c4f40b1daf812466ff5bae53f6125409fce16064e32a758653587a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7\""
Sep 9 05:04:01.798136 containerd[1534]: time="2025-09-09T05:04:01.798101662Z" level=info msg="StartContainer for \"ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7\""
Sep 9 05:04:01.799225 containerd[1534]: time="2025-09-09T05:04:01.799161530Z" level=info msg="connecting to shim ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7" address="unix:///run/containerd/s/1cb5b2667e4451c4628c426689f347d4d75228ff951531fcd27a4739b66c267c" protocol=ttrpc version=3
Sep 9 05:04:01.803383 containerd[1534]: time="2025-09-09T05:04:01.803345767Z" level=info msg="CreateContainer within sandbox \"c45e27872fc5d957eee6071edb6d25f1953e946399397bf114941c47bf42d710\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8\""
Sep 9 05:04:01.804033 containerd[1534]: time="2025-09-09T05:04:01.803814038Z" level=info msg="StartContainer for \"2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8\""
Sep 9 05:04:01.805051 containerd[1534]: time="2025-09-09T05:04:01.805017580Z" level=info msg="connecting to shim 2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8" address="unix:///run/containerd/s/35a955118e747f45230f9722f58a57152d6ec2dad697effe152d820757519f07" protocol=ttrpc version=3
Sep 9 05:04:01.814721 systemd[1]: Started cri-containerd-b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97.scope - libcontainer container b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97.
Sep 9 05:04:01.818372 systemd[1]: Started cri-containerd-ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7.scope - libcontainer container ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7.
Sep 9 05:04:01.823840 kubelet[2319]: E0909 05:04:01.823785 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms"
Sep 9 05:04:01.833679 systemd[1]: Started cri-containerd-2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8.scope - libcontainer container 2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8.
Sep 9 05:04:01.873995 containerd[1534]: time="2025-09-09T05:04:01.873946125Z" level=info msg="StartContainer for \"b1d73c5c2cbd98c1d4fc9fe0c14bbf8b406b263c8ebbcde9330e30813eb92c97\" returns successfully"
Sep 9 05:04:01.875535 containerd[1534]: time="2025-09-09T05:04:01.875391352Z" level=info msg="StartContainer for \"ca4a8122dc9c6a514e10bd18295d807b968f21bd45dd1d16a7573607c6edc0c7\" returns successfully"
Sep 9 05:04:01.883696 containerd[1534]: time="2025-09-09T05:04:01.883663057Z" level=info msg="StartContainer for \"2a3ac1fce5740de38e6e5817cd179f69dd4cbc18d62625ba3b67a087051685a8\" returns successfully"
Sep 9 05:04:01.970659 kubelet[2319]: I0909 05:04:01.970542 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 05:04:02.241018 kubelet[2319]: E0909 05:04:02.240903 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:02.241018 kubelet[2319]: E0909 05:04:02.241010 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:02.244914 kubelet[2319]: E0909 05:04:02.244886 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:02.245008 kubelet[2319]: E0909 05:04:02.244992 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:02.248506 kubelet[2319]: E0909 05:04:02.246197 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:02.248655 kubelet[2319]: E0909 05:04:02.248636 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:03.250468 kubelet[2319]: E0909 05:04:03.250437 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:03.250788 kubelet[2319]: E0909 05:04:03.250584 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:03.250832 kubelet[2319]: E0909 05:04:03.250798 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:03.251508 kubelet[2319]: E0909 05:04:03.250868 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:03.251508 kubelet[2319]: E0909 05:04:03.251137 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 05:04:03.251508 kubelet[2319]: E0909 05:04:03.251220 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:03.449211 kubelet[2319]: E0909 05:04:03.449170 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 05:04:03.589062 kubelet[2319]: E0909 05:04:03.588879 2319 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186384c055957a6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:04:01.209072238 +0000 UTC m=+1.496990910,LastTimestamp:2025-09-09 05:04:01.209072238 +0000 UTC m=+1.496990910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 05:04:03.601672 kubelet[2319]: I0909 05:04:03.601641 2319 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 05:04:03.620627 kubelet[2319]: I0909 05:04:03.620597 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:03.627103 kubelet[2319]: E0909 05:04:03.627053 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:03.627103 kubelet[2319]: I0909 05:04:03.627086 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:03.629007 kubelet[2319]: E0909 05:04:03.628947 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:03.629007 kubelet[2319]: I0909 05:04:03.628985 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:03.631369 kubelet[2319]: E0909 05:04:03.631129 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:04.207149 kubelet[2319]: I0909 05:04:04.207091 2319 apiserver.go:52] "Watching apiserver"
Sep 9 05:04:04.219803 kubelet[2319]: I0909 05:04:04.219752 2319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 05:04:05.465939 kubelet[2319]: I0909 05:04:05.465902 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:05.470917 kubelet[2319]: E0909 05:04:05.470832 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:05.635766 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-7.scope)...
Sep 9 05:04:05.635781 systemd[1]: Reloading...
Sep 9 05:04:05.712546 zram_generator::config[2636]: No configuration found.
Sep 9 05:04:05.876069 systemd[1]: Reloading finished in 239 ms.
Sep 9 05:04:05.905759 kubelet[2319]: I0909 05:04:05.905711 2319 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:04:05.906921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:04:05.929441 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 05:04:05.929729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:04:05.929792 systemd[1]: kubelet.service: Consumed 1.894s CPU time, 130.6M memory peak.
Sep 9 05:04:05.931589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:04:06.067281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:04:06.082855 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:04:06.118610 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:04:06.118610 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:04:06.118610 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:04:06.118945 kubelet[2678]: I0909 05:04:06.118655 2678 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:04:06.124135 kubelet[2678]: I0909 05:04:06.124091 2678 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 05:04:06.124135 kubelet[2678]: I0909 05:04:06.124122 2678 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:04:06.124392 kubelet[2678]: I0909 05:04:06.124365 2678 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 05:04:06.125567 kubelet[2678]: I0909 05:04:06.125547 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 05:04:06.127948 kubelet[2678]: I0909 05:04:06.127799 2678 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:04:06.132951 kubelet[2678]: I0909 05:04:06.132931 2678 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:04:06.135795 kubelet[2678]: I0909 05:04:06.135745 2678 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:04:06.136082 kubelet[2678]: I0909 05:04:06.136047 2678 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:04:06.136342 kubelet[2678]: I0909 05:04:06.136152 2678 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:04:06.136478 kubelet[2678]: I0909 05:04:06.136463 2678 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:04:06.136563 kubelet[2678]: I0909 05:04:06.136553 2678 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 05:04:06.136672 kubelet[2678]: I0909 05:04:06.136660 2678 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:04:06.136866 kubelet[2678]: I0909 05:04:06.136854 2678 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 05:04:06.136936 kubelet[2678]: I0909 05:04:06.136927 2678 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:04:06.137476 kubelet[2678]: I0909 05:04:06.137458 2678 kubelet.go:352] "Adding apiserver pod source"
Sep 9 05:04:06.137595 kubelet[2678]: I0909 05:04:06.137584 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:04:06.138869 kubelet[2678]: I0909 05:04:06.138785 2678 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:04:06.139621 kubelet[2678]: I0909 05:04:06.139280 2678 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:04:06.139746 kubelet[2678]: I0909 05:04:06.139726 2678 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 05:04:06.139784 kubelet[2678]: I0909 05:04:06.139760 2678 server.go:1287] "Started kubelet"
Sep 9 05:04:06.140915 kubelet[2678]: I0909 05:04:06.140890 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:04:06.141926 kubelet[2678]: I0909 05:04:06.141880 2678 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:04:06.147532 kubelet[2678]: I0909 05:04:06.145320 2678 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 05:04:06.147532 kubelet[2678]: I0909 05:04:06.146198 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:04:06.147532 kubelet[2678]: I0909 05:04:06.145330 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:04:06.147726 kubelet[2678]: I0909 05:04:06.147707 2678 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:04:06.150564 kubelet[2678]: I0909 05:04:06.150530 2678 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 05:04:06.150823 kubelet[2678]: E0909 05:04:06.150792 2678 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:04:06.152257 kubelet[2678]: I0909 05:04:06.151576 2678 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 05:04:06.152257 kubelet[2678]: I0909 05:04:06.151723 2678 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:04:06.157999 kubelet[2678]: E0909 05:04:06.157822 2678 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:04:06.159965 kubelet[2678]: I0909 05:04:06.159935 2678 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:04:06.160097 kubelet[2678]: I0909 05:04:06.160048 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:04:06.164308 kubelet[2678]: I0909 05:04:06.164244 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:04:06.165346 kubelet[2678]: I0909 05:04:06.165323 2678 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:04:06.167744 kubelet[2678]: I0909 05:04:06.167722 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:04:06.167884 kubelet[2678]: I0909 05:04:06.167861 2678 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 05:04:06.167979 kubelet[2678]: I0909 05:04:06.167966 2678 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 05:04:06.168166 kubelet[2678]: I0909 05:04:06.168154 2678 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 05:04:06.168298 kubelet[2678]: E0909 05:04:06.168280 2678 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:04:06.202437 kubelet[2678]: I0909 05:04:06.202395 2678 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 05:04:06.202437 kubelet[2678]: I0909 05:04:06.202417 2678 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 05:04:06.202437 kubelet[2678]: I0909 05:04:06.202438 2678 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:04:06.203267 kubelet[2678]: I0909 05:04:06.203228 2678 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 05:04:06.203267 kubelet[2678]: I0909 05:04:06.203260 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 05:04:06.203360 kubelet[2678]: I0909 05:04:06.203281 2678 policy_none.go:49] "None policy: Start"
Sep 9 05:04:06.203360 kubelet[2678]: I0909 05:04:06.203291 2678 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 05:04:06.203360 kubelet[2678]: I0909 05:04:06.203302 2678 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:04:06.203445 kubelet[2678]: I0909 05:04:06.203429 2678 state_mem.go:75] "Updated machine memory state"
Sep 9 05:04:06.207742 kubelet[2678]: I0909 05:04:06.207711 2678 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:04:06.207890 kubelet[2678]: I0909 05:04:06.207875 2678 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:04:06.207974 kubelet[2678]: I0909 05:04:06.207894 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:04:06.208956 kubelet[2678]: I0909 05:04:06.208632 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:04:06.209906 kubelet[2678]: E0909 05:04:06.209197 2678 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 05:04:06.269361 kubelet[2678]: I0909 05:04:06.269281 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:06.269745 kubelet[2678]: I0909 05:04:06.269725 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.269987 kubelet[2678]: I0909 05:04:06.269783 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:06.274991 kubelet[2678]: E0909 05:04:06.274963 2678 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:06.311458 kubelet[2678]: I0909 05:04:06.311414 2678 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 05:04:06.317328 kubelet[2678]: I0909 05:04:06.317283 2678 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 05:04:06.317421 kubelet[2678]: I0909 05:04:06.317368 2678 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 05:04:06.353335 kubelet[2678]: I0909 05:04:06.353275 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.353335 kubelet[2678]: I0909 05:04:06.353321 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.353335 kubelet[2678]: I0909 05:04:06.353340 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:06.353526 kubelet[2678]: I0909 05:04:06.353365 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:06.353526 kubelet[2678]: I0909 05:04:06.353387 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.353526 kubelet[2678]: I0909 05:04:06.353426 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.353526 kubelet[2678]: I0909 05:04:06.353464 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9619f5755ab1cded03a4068f9036c7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9619f5755ab1cded03a4068f9036c7cd\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:06.353526 kubelet[2678]: I0909 05:04:06.353520 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:04:06.353625 kubelet[2678]: I0909 05:04:06.353541 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:06.574947 kubelet[2678]: E0909 05:04:06.574732 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:06.574947 kubelet[2678]: E0909 05:04:06.574768 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:06.576138 kubelet[2678]: E0909 05:04:06.576054 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:06.629310 sudo[2713]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 05:04:06.629625 sudo[2713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 9 05:04:06.939399 sudo[2713]: pam_unix(sudo:session): session closed for user root
Sep 9 05:04:07.140791 kubelet[2678]: I0909 05:04:07.139424 2678 apiserver.go:52] "Watching apiserver"
Sep 9 05:04:07.151855 kubelet[2678]: I0909 05:04:07.151828 2678 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 05:04:07.185990 kubelet[2678]: I0909 05:04:07.184601 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:07.185990 kubelet[2678]: I0909 05:04:07.184736 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:07.185990 kubelet[2678]: E0909 05:04:07.184870 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:07.192378 kubelet[2678]: E0909 05:04:07.192183 2678 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:04:07.192759 kubelet[2678]: E0909 05:04:07.192433 2678 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:04:07.192939 kubelet[2678]: E0909 05:04:07.192915 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:07.193152 kubelet[2678]: E0909 05:04:07.193064 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:07.210162 kubelet[2678]: I0909 05:04:07.209343 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.209328337 podStartE2EDuration="1.209328337s" podCreationTimestamp="2025-09-09 05:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:07.209279733 +0000 UTC m=+1.122570620" watchObservedRunningTime="2025-09-09 05:04:07.209328337 +0000 UTC m=+1.122619264"
Sep 9 05:04:07.225186 kubelet[2678]: I0909 05:04:07.224634 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.224617192 podStartE2EDuration="2.224617192s" podCreationTimestamp="2025-09-09 05:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:07.217573846 +0000 UTC m=+1.130864733" watchObservedRunningTime="2025-09-09 05:04:07.224617192 +0000 UTC m=+1.137908079"
Sep 9 05:04:07.251541 kubelet[2678]: I0909 05:04:07.250532 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.249117426 podStartE2EDuration="1.249117426s" podCreationTimestamp="2025-09-09 05:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:07.225798977 +0000 UTC m=+1.139089864" watchObservedRunningTime="2025-09-09 05:04:07.249117426 +0000 UTC m=+1.162408313"
Sep 9 05:04:08.186318 kubelet[2678]: E0909 05:04:08.185759 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:08.186318 kubelet[2678]: E0909 05:04:08.186028 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:08.692688 sudo[1752]: pam_unix(sudo:session): session closed for user root
Sep 9 05:04:08.693812 sshd[1751]: Connection closed by 10.0.0.1 port 58686
Sep 9 05:04:08.694325 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:08.698180 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:58686.service: Deactivated successfully.
Sep 9 05:04:08.700712 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 05:04:08.700892 systemd[1]: session-7.scope: Consumed 6.124s CPU time, 257.4M memory peak.
Sep 9 05:04:08.701989 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit.
Sep 9 05:04:08.703084 systemd-logind[1521]: Removed session 7.
Sep 9 05:04:09.188426 kubelet[2678]: E0909 05:04:09.188340 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:10.259687 kubelet[2678]: E0909 05:04:10.259653 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:12.375073 kubelet[2678]: I0909 05:04:12.375036 2678 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 05:04:12.375637 kubelet[2678]: I0909 05:04:12.375476 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 05:04:12.375666 containerd[1534]: time="2025-09-09T05:04:12.375334642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 05:04:12.961482 systemd[1]: Created slice kubepods-besteffort-pod733b5894_03d7_44b1_816f_8554d57604c0.slice - libcontainer container kubepods-besteffort-pod733b5894_03d7_44b1_816f_8554d57604c0.slice.
Sep 9 05:04:12.977677 systemd[1]: Created slice kubepods-burstable-pod13096f83_41ee_40dc_b00b_da0988c4264d.slice - libcontainer container kubepods-burstable-pod13096f83_41ee_40dc_b00b_da0988c4264d.slice.
Sep 9 05:04:12.995432 kubelet[2678]: I0909 05:04:12.995384 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-run\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995432 kubelet[2678]: I0909 05:04:12.995429 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-hostproc\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995592 kubelet[2678]: I0909 05:04:12.995447 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-cgroup\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995592 kubelet[2678]: I0909 05:04:12.995464 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-etc-cni-netd\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995592 kubelet[2678]: I0909 05:04:12.995491 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-kernel\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995592 kubelet[2678]: I0909 05:04:12.995525 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13096f83-41ee-40dc-b00b-da0988c4264d-clustermesh-secrets\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995592 kubelet[2678]: I0909 05:04:12.995544 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227zr\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-kube-api-access-227zr\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995702 kubelet[2678]: I0909 05:04:12.995576 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-lib-modules\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995702 kubelet[2678]: I0909 05:04:12.995591 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-config-path\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995702 kubelet[2678]: I0909 05:04:12.995605 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-net\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995702 kubelet[2678]: I0909 05:04:12.995621 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/733b5894-03d7-44b1-816f-8554d57604c0-lib-modules\") pod \"kube-proxy-gmh6d\" (UID: \"733b5894-03d7-44b1-816f-8554d57604c0\") " pod="kube-system/kube-proxy-gmh6d"
Sep 9 05:04:12.995702 kubelet[2678]: I0909 05:04:12.995648 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-bpf-maps\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995662 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrmq\" (UniqueName: \"kubernetes.io/projected/733b5894-03d7-44b1-816f-8554d57604c0-kube-api-access-gtrmq\") pod \"kube-proxy-gmh6d\" (UID: \"733b5894-03d7-44b1-816f-8554d57604c0\") " pod="kube-system/kube-proxy-gmh6d"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995677 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cni-path\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995692 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-xtables-lock\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995714 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/733b5894-03d7-44b1-816f-8554d57604c0-kube-proxy\") pod \"kube-proxy-gmh6d\" (UID: \"733b5894-03d7-44b1-816f-8554d57604c0\") " pod="kube-system/kube-proxy-gmh6d"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995741 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-hubble-tls\") pod \"cilium-r4zw5\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " pod="kube-system/cilium-r4zw5"
Sep 9 05:04:12.995795 kubelet[2678]: I0909 05:04:12.995759 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/733b5894-03d7-44b1-816f-8554d57604c0-xtables-lock\") pod \"kube-proxy-gmh6d\" (UID: \"733b5894-03d7-44b1-816f-8554d57604c0\") " pod="kube-system/kube-proxy-gmh6d"
Sep 9 05:04:13.271144 kubelet[2678]: E0909 05:04:13.271031 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:13.272341 containerd[1534]: time="2025-09-09T05:04:13.272018057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmh6d,Uid:733b5894-03d7-44b1-816f-8554d57604c0,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:13.281722 kubelet[2678]: E0909 05:04:13.281685 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:13.282690 containerd[1534]: time="2025-09-09T05:04:13.282654788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4zw5,Uid:13096f83-41ee-40dc-b00b-da0988c4264d,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:13.304679 containerd[1534]: time="2025-09-09T05:04:13.304634263Z" level=info msg="connecting to shim c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8" address="unix:///run/containerd/s/5d77963ebe4b9257af75ce31147d153fa9ccb0ca0eaa13714ca077e16d894d32" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:13.321112 containerd[1534]: time="2025-09-09T05:04:13.321048314Z" level=info msg="connecting to shim d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:13.332691 systemd[1]: Started cri-containerd-c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8.scope - libcontainer container c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8.
Sep 9 05:04:13.353658 systemd[1]: Started cri-containerd-d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212.scope - libcontainer container d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212.
Sep 9 05:04:13.374750 containerd[1534]: time="2025-09-09T05:04:13.374708152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmh6d,Uid:733b5894-03d7-44b1-816f-8554d57604c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8\""
Sep 9 05:04:13.375558 kubelet[2678]: E0909 05:04:13.375350 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:13.381271 containerd[1534]: time="2025-09-09T05:04:13.381230433Z" level=info msg="CreateContainer within sandbox \"c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 05:04:13.390116 containerd[1534]: time="2025-09-09T05:04:13.390081173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4zw5,Uid:13096f83-41ee-40dc-b00b-da0988c4264d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\""
Sep 9 05:04:13.393899 kubelet[2678]: E0909 05:04:13.393862 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:13.398023 containerd[1534]: time="2025-09-09T05:04:13.397971174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 05:04:13.398290 containerd[1534]: time="2025-09-09T05:04:13.397674223Z" level=info msg="Container 9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:13.411871 containerd[1534]: time="2025-09-09T05:04:13.411358116Z" level=info msg="CreateContainer within sandbox \"c76d2c28946f5377a3d64e114e600f555707c22255098cb5fa81b55f0e68c4f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0\""
Sep 9 05:04:13.411381 systemd[1]: Created slice kubepods-besteffort-pod47bf2e54_5d5a_4faf_95ca_09c48ae9f8a8.slice - libcontainer container kubepods-besteffort-pod47bf2e54_5d5a_4faf_95ca_09c48ae9f8a8.slice.
Sep 9 05:04:13.413915 containerd[1534]: time="2025-09-09T05:04:13.413872855Z" level=info msg="StartContainer for \"9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0\""
Sep 9 05:04:13.415372 containerd[1534]: time="2025-09-09T05:04:13.415196348Z" level=info msg="connecting to shim 9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0" address="unix:///run/containerd/s/5d77963ebe4b9257af75ce31147d153fa9ccb0ca0eaa13714ca077e16d894d32" protocol=ttrpc version=3
Sep 9 05:04:13.445671 systemd[1]: Started cri-containerd-9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0.scope - libcontainer container 9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0.
Sep 9 05:04:13.480158 containerd[1534]: time="2025-09-09T05:04:13.480122121Z" level=info msg="StartContainer for \"9c44caf843605c2ff5f3e3083298f1792f0c9844356474be30ea9c4a8908fdb0\" returns successfully"
Sep 9 05:04:13.500046 kubelet[2678]: I0909 05:04:13.499973 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gp67\" (UniqueName: \"kubernetes.io/projected/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-kube-api-access-4gp67\") pod \"cilium-operator-6c4d7847fc-kcqjd\" (UID: \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\") " pod="kube-system/cilium-operator-6c4d7847fc-kcqjd"
Sep 9 05:04:13.500046 kubelet[2678]: I0909 05:04:13.500017 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kcqjd\" (UID: \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\") " pod="kube-system/cilium-operator-6c4d7847fc-kcqjd"
Sep 9 05:04:13.720785 kubelet[2678]: E0909 05:04:13.720734 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:13.721168 containerd[1534]: time="2025-09-09T05:04:13.721122892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kcqjd,Uid:47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8,Namespace:kube-system,Attempt:0,}"
Sep 9 05:04:13.740091 containerd[1534]: time="2025-09-09T05:04:13.740047520Z" level=info msg="connecting to shim f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682" address="unix:///run/containerd/s/22bc0c325adc0f5c027c50634d7bd521f3da61e17e2dcbab07a04ee2f761d468" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:04:13.763667 systemd[1]: Started cri-containerd-f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682.scope - libcontainer container f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682.
Sep 9 05:04:13.796336 containerd[1534]: time="2025-09-09T05:04:13.795867910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kcqjd,Uid:47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\""
Sep 9 05:04:13.796651 kubelet[2678]: E0909 05:04:13.796627 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:14.202903 kubelet[2678]: E0909 05:04:14.201915 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:14.217670 kubelet[2678]: I0909 05:04:14.217570 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gmh6d" podStartSLOduration=2.217550476 podStartE2EDuration="2.217550476s" podCreationTimestamp="2025-09-09 05:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:14.217281872 +0000 UTC m=+8.130572719" watchObservedRunningTime="2025-09-09 05:04:14.217550476 +0000 UTC m=+8.130841363"
Sep 9 05:04:14.367608 kubelet[2678]: E0909 05:04:14.367307 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:15.203709 kubelet[2678]: E0909 05:04:15.203678 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:17.834616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994941181.mount: Deactivated successfully.
Sep 9 05:04:18.811661 kubelet[2678]: E0909 05:04:18.811581 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:19.071609 containerd[1534]: time="2025-09-09T05:04:19.071351035Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:04:19.072527 containerd[1534]: time="2025-09-09T05:04:19.071994857Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 05:04:19.072930 containerd[1534]: time="2025-09-09T05:04:19.072890758Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:04:19.074612 containerd[1534]: time="2025-09-09T05:04:19.074581832Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.676565831s" Sep 9 05:04:19.074682 containerd[1534]: time="2025-09-09T05:04:19.074619049Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 05:04:19.084361 containerd[1534]: time="2025-09-09T05:04:19.084295192Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:04:19.089882 containerd[1534]: time="2025-09-09T05:04:19.089333117Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:04:19.097403 containerd[1534]: time="2025-09-09T05:04:19.097353763Z" level=info msg="Container 799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:19.100634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846643300.mount: Deactivated successfully. 
Sep 9 05:04:19.110235 containerd[1534]: time="2025-09-09T05:04:19.110172781Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\"" Sep 9 05:04:19.112577 containerd[1534]: time="2025-09-09T05:04:19.112539532Z" level=info msg="StartContainer for \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\"" Sep 9 05:04:19.113621 containerd[1534]: time="2025-09-09T05:04:19.113592987Z" level=info msg="connecting to shim 799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" protocol=ttrpc version=3 Sep 9 05:04:19.159704 systemd[1]: Started cri-containerd-799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff.scope - libcontainer container 799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff. Sep 9 05:04:19.189133 containerd[1534]: time="2025-09-09T05:04:19.189073463Z" level=info msg="StartContainer for \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" returns successfully" Sep 9 05:04:19.203049 systemd[1]: cri-containerd-799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff.scope: Deactivated successfully. 
Sep 9 05:04:19.213038 kubelet[2678]: E0909 05:04:19.213004 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:19.239857 containerd[1534]: time="2025-09-09T05:04:19.239780990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" id:\"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" pid:3097 exited_at:{seconds:1757394259 nanos:228614507}" Sep 9 05:04:19.240658 containerd[1534]: time="2025-09-09T05:04:19.240605177Z" level=info msg="received exit event container_id:\"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" id:\"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" pid:3097 exited_at:{seconds:1757394259 nanos:228614507}" Sep 9 05:04:19.273439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff-rootfs.mount: Deactivated successfully. Sep 9 05:04:19.528909 update_engine[1523]: I20250909 05:04:19.528829 1523 update_attempter.cc:509] Updating boot flags... 
Sep 9 05:04:20.227076 kubelet[2678]: E0909 05:04:20.227012 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:20.230570 containerd[1534]: time="2025-09-09T05:04:20.230254221Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:04:20.269433 kubelet[2678]: E0909 05:04:20.269397 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:20.279288 containerd[1534]: time="2025-09-09T05:04:20.279128639Z" level=info msg="Container 6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:20.288646 containerd[1534]: time="2025-09-09T05:04:20.288600547Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\"" Sep 9 05:04:20.289530 containerd[1534]: time="2025-09-09T05:04:20.289253399Z" level=info msg="StartContainer for \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\"" Sep 9 05:04:20.290263 containerd[1534]: time="2025-09-09T05:04:20.290235797Z" level=info msg="connecting to shim 6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" protocol=ttrpc version=3 Sep 9 05:04:20.311828 systemd[1]: Started cri-containerd-6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6.scope - libcontainer container 
6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6. Sep 9 05:04:20.345281 containerd[1534]: time="2025-09-09T05:04:20.345241552Z" level=info msg="StartContainer for \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" returns successfully" Sep 9 05:04:20.360156 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:04:20.360362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:04:20.360575 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:04:20.363092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:04:20.366077 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:04:20.366847 systemd[1]: cri-containerd-6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6.scope: Deactivated successfully. Sep 9 05:04:20.367492 containerd[1534]: time="2025-09-09T05:04:20.367294677Z" level=info msg="received exit event container_id:\"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" id:\"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" pid:3158 exited_at:{seconds:1757394260 nanos:366977896}" Sep 9 05:04:20.367492 containerd[1534]: time="2025-09-09T05:04:20.367390840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" id:\"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" pid:3158 exited_at:{seconds:1757394260 nanos:366977896}" Sep 9 05:04:20.393024 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 05:04:21.232368 kubelet[2678]: E0909 05:04:21.232068 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:21.235277 containerd[1534]: time="2025-09-09T05:04:21.235240059Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:04:21.257420 containerd[1534]: time="2025-09-09T05:04:21.257362496Z" level=info msg="Container c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:21.267560 containerd[1534]: time="2025-09-09T05:04:21.267481515Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\"" Sep 9 05:04:21.269043 containerd[1534]: time="2025-09-09T05:04:21.268996958Z" level=info msg="StartContainer for \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\"" Sep 9 05:04:21.270709 containerd[1534]: time="2025-09-09T05:04:21.270678873Z" level=info msg="connecting to shim c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" protocol=ttrpc version=3 Sep 9 05:04:21.279870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6-rootfs.mount: Deactivated successfully. Sep 9 05:04:21.295710 systemd[1]: Started cri-containerd-c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722.scope - libcontainer container c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722. 
Sep 9 05:04:21.354803 containerd[1534]: time="2025-09-09T05:04:21.354764991Z" level=info msg="StartContainer for \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" returns successfully" Sep 9 05:04:21.358256 systemd[1]: cri-containerd-c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722.scope: Deactivated successfully. Sep 9 05:04:21.360834 containerd[1534]: time="2025-09-09T05:04:21.360674981Z" level=info msg="received exit event container_id:\"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" id:\"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" pid:3219 exited_at:{seconds:1757394261 nanos:360203021}" Sep 9 05:04:21.360974 containerd[1534]: time="2025-09-09T05:04:21.360927608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" id:\"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" pid:3219 exited_at:{seconds:1757394261 nanos:360203021}" Sep 9 05:04:21.394510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722-rootfs.mount: Deactivated successfully. 
Sep 9 05:04:21.476943 containerd[1534]: time="2025-09-09T05:04:21.476889867Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:04:21.477463 containerd[1534]: time="2025-09-09T05:04:21.477431337Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 05:04:21.478434 containerd[1534]: time="2025-09-09T05:04:21.478387223Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:04:21.480438 containerd[1534]: time="2025-09-09T05:04:21.480391714Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.396029491s" Sep 9 05:04:21.480554 containerd[1534]: time="2025-09-09T05:04:21.480443776Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 05:04:21.482745 containerd[1534]: time="2025-09-09T05:04:21.482633907Z" level=info msg="CreateContainer within sandbox \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:04:21.494684 containerd[1534]: time="2025-09-09T05:04:21.493719656Z" level=info msg="Container 
dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:21.499695 containerd[1534]: time="2025-09-09T05:04:21.499639570Z" level=info msg="CreateContainer within sandbox \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\"" Sep 9 05:04:21.500963 containerd[1534]: time="2025-09-09T05:04:21.500567124Z" level=info msg="StartContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\"" Sep 9 05:04:21.501867 containerd[1534]: time="2025-09-09T05:04:21.501820857Z" level=info msg="connecting to shim dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6" address="unix:///run/containerd/s/22bc0c325adc0f5c027c50634d7bd521f3da61e17e2dcbab07a04ee2f761d468" protocol=ttrpc version=3 Sep 9 05:04:21.522752 systemd[1]: Started cri-containerd-dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6.scope - libcontainer container dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6. 
Sep 9 05:04:21.567220 containerd[1534]: time="2025-09-09T05:04:21.567127838Z" level=info msg="StartContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" returns successfully" Sep 9 05:04:22.253530 kubelet[2678]: E0909 05:04:22.253076 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:22.257976 kubelet[2678]: E0909 05:04:22.257924 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:22.262529 containerd[1534]: time="2025-09-09T05:04:22.261628139Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:04:22.279483 kubelet[2678]: I0909 05:04:22.279400 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kcqjd" podStartSLOduration=1.5953456780000002 podStartE2EDuration="9.279377999s" podCreationTimestamp="2025-09-09 05:04:13 +0000 UTC" firstStartedPulling="2025-09-09 05:04:13.797119396 +0000 UTC m=+7.710410283" lastFinishedPulling="2025-09-09 05:04:21.481151757 +0000 UTC m=+15.394442604" observedRunningTime="2025-09-09 05:04:22.278772474 +0000 UTC m=+16.192063361" watchObservedRunningTime="2025-09-09 05:04:22.279377999 +0000 UTC m=+16.192668886" Sep 9 05:04:22.309181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540782271.mount: Deactivated successfully. 
Sep 9 05:04:22.310608 containerd[1534]: time="2025-09-09T05:04:22.310556811Z" level=info msg="Container bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:22.325307 containerd[1534]: time="2025-09-09T05:04:22.325237149Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\"" Sep 9 05:04:22.325877 containerd[1534]: time="2025-09-09T05:04:22.325833710Z" level=info msg="StartContainer for \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\"" Sep 9 05:04:22.327818 containerd[1534]: time="2025-09-09T05:04:22.327768773Z" level=info msg="connecting to shim bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" protocol=ttrpc version=3 Sep 9 05:04:22.364753 systemd[1]: Started cri-containerd-bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94.scope - libcontainer container bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94. Sep 9 05:04:22.400543 systemd[1]: cri-containerd-bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94.scope: Deactivated successfully. 
Sep 9 05:04:22.401201 containerd[1534]: time="2025-09-09T05:04:22.401156338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" id:\"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" pid:3297 exited_at:{seconds:1757394262 nanos:400562178}" Sep 9 05:04:22.403915 containerd[1534]: time="2025-09-09T05:04:22.403788042Z" level=info msg="received exit event container_id:\"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" id:\"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" pid:3297 exited_at:{seconds:1757394262 nanos:400562178}" Sep 9 05:04:22.411144 containerd[1534]: time="2025-09-09T05:04:22.411095118Z" level=info msg="StartContainer for \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" returns successfully" Sep 9 05:04:22.425450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94-rootfs.mount: Deactivated successfully. 
Sep 9 05:04:23.269198 kubelet[2678]: E0909 05:04:23.269053 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:23.270034 kubelet[2678]: E0909 05:04:23.269883 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:23.277022 containerd[1534]: time="2025-09-09T05:04:23.276651470Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:04:23.306314 containerd[1534]: time="2025-09-09T05:04:23.301099774Z" level=info msg="Container 1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:04:23.314592 containerd[1534]: time="2025-09-09T05:04:23.314551400Z" level=info msg="CreateContainer within sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\"" Sep 9 05:04:23.315574 containerd[1534]: time="2025-09-09T05:04:23.315528416Z" level=info msg="StartContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\"" Sep 9 05:04:23.316825 containerd[1534]: time="2025-09-09T05:04:23.316735642Z" level=info msg="connecting to shim 1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc" address="unix:///run/containerd/s/0597ea84313841c7a22630a1c7173e6d384d22830fc2802a72e1b934c5cded67" protocol=ttrpc version=3 Sep 9 05:04:23.337707 systemd[1]: Started cri-containerd-1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc.scope - libcontainer container 1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc. 
Sep 9 05:04:23.385087 containerd[1534]: time="2025-09-09T05:04:23.385042333Z" level=info msg="StartContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" returns successfully" Sep 9 05:04:23.470851 containerd[1534]: time="2025-09-09T05:04:23.470802512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" id:\"6d13d99e2e18a10a83bfe957247d2eb2b89d417e612723888a90c8a37a6d7350\" pid:3366 exited_at:{seconds:1757394263 nanos:470338453}" Sep 9 05:04:23.496583 kubelet[2678]: I0909 05:04:23.496555 2678 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:04:23.554289 systemd[1]: Created slice kubepods-burstable-pod7659fb80_93a4_4265_af6f_c5e452238e52.slice - libcontainer container kubepods-burstable-pod7659fb80_93a4_4265_af6f_c5e452238e52.slice. Sep 9 05:04:23.561247 systemd[1]: Created slice kubepods-burstable-pod121fb0da_4683_40d1_9b9a_6e79b7ec0ffd.slice - libcontainer container kubepods-burstable-pod121fb0da_4683_40d1_9b9a_6e79b7ec0ffd.slice. 
Sep 9 05:04:23.573996 kubelet[2678]: I0909 05:04:23.573883 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121fb0da-4683-40d1-9b9a-6e79b7ec0ffd-config-volume\") pod \"coredns-668d6bf9bc-g2b4g\" (UID: \"121fb0da-4683-40d1-9b9a-6e79b7ec0ffd\") " pod="kube-system/coredns-668d6bf9bc-g2b4g" Sep 9 05:04:23.573996 kubelet[2678]: I0909 05:04:23.573922 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7659fb80-93a4-4265-af6f-c5e452238e52-config-volume\") pod \"coredns-668d6bf9bc-w4xbc\" (UID: \"7659fb80-93a4-4265-af6f-c5e452238e52\") " pod="kube-system/coredns-668d6bf9bc-w4xbc" Sep 9 05:04:23.573996 kubelet[2678]: I0909 05:04:23.573946 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlwr\" (UniqueName: \"kubernetes.io/projected/7659fb80-93a4-4265-af6f-c5e452238e52-kube-api-access-gnlwr\") pod \"coredns-668d6bf9bc-w4xbc\" (UID: \"7659fb80-93a4-4265-af6f-c5e452238e52\") " pod="kube-system/coredns-668d6bf9bc-w4xbc" Sep 9 05:04:23.573996 kubelet[2678]: I0909 05:04:23.573975 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzrz\" (UniqueName: \"kubernetes.io/projected/121fb0da-4683-40d1-9b9a-6e79b7ec0ffd-kube-api-access-lqzrz\") pod \"coredns-668d6bf9bc-g2b4g\" (UID: \"121fb0da-4683-40d1-9b9a-6e79b7ec0ffd\") " pod="kube-system/coredns-668d6bf9bc-g2b4g" Sep 9 05:04:23.857898 kubelet[2678]: E0909 05:04:23.857868 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:23.859205 containerd[1534]: time="2025-09-09T05:04:23.858734334Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-w4xbc,Uid:7659fb80-93a4-4265-af6f-c5e452238e52,Namespace:kube-system,Attempt:0,}" Sep 9 05:04:23.866016 kubelet[2678]: E0909 05:04:23.865983 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:23.869809 containerd[1534]: time="2025-09-09T05:04:23.869764586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2b4g,Uid:121fb0da-4683-40d1-9b9a-6e79b7ec0ffd,Namespace:kube-system,Attempt:0,}" Sep 9 05:04:24.274356 kubelet[2678]: E0909 05:04:24.274155 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:24.297919 kubelet[2678]: I0909 05:04:24.297792 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r4zw5" podStartSLOduration=6.610265229 podStartE2EDuration="12.297773565s" podCreationTimestamp="2025-09-09 05:04:12 +0000 UTC" firstStartedPulling="2025-09-09 05:04:13.396586362 +0000 UTC m=+7.309877249" lastFinishedPulling="2025-09-09 05:04:19.084094698 +0000 UTC m=+12.997385585" observedRunningTime="2025-09-09 05:04:24.297566369 +0000 UTC m=+18.210857256" watchObservedRunningTime="2025-09-09 05:04:24.297773565 +0000 UTC m=+18.211064452" Sep 9 05:04:25.276550 kubelet[2678]: E0909 05:04:25.276516 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:25.471321 systemd-networkd[1453]: cilium_host: Link UP Sep 9 05:04:25.471528 systemd-networkd[1453]: cilium_net: Link UP Sep 9 05:04:25.471718 systemd-networkd[1453]: cilium_net: Gained carrier Sep 9 05:04:25.471880 systemd-networkd[1453]: cilium_host: Gained carrier Sep 9 05:04:25.570584 systemd-networkd[1453]: 
cilium_vxlan: Link UP Sep 9 05:04:25.570594 systemd-networkd[1453]: cilium_vxlan: Gained carrier Sep 9 05:04:25.576721 systemd-networkd[1453]: cilium_host: Gained IPv6LL Sep 9 05:04:25.599677 systemd-networkd[1453]: cilium_net: Gained IPv6LL Sep 9 05:04:25.852535 kernel: NET: Registered PF_ALG protocol family Sep 9 05:04:26.279133 kubelet[2678]: E0909 05:04:26.279102 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:26.466778 systemd-networkd[1453]: lxc_health: Link UP Sep 9 05:04:26.479298 systemd-networkd[1453]: lxc_health: Gained carrier Sep 9 05:04:26.913541 kernel: eth0: renamed from tmpeb6f7 Sep 9 05:04:26.924547 kernel: eth0: renamed from tmpb38c9 Sep 9 05:04:26.924832 systemd-networkd[1453]: lxc3b11b0b1b92e: Link UP Sep 9 05:04:26.925276 systemd-networkd[1453]: lxc72197b14df1d: Link UP Sep 9 05:04:26.925661 systemd-networkd[1453]: lxc3b11b0b1b92e: Gained carrier Sep 9 05:04:26.925799 systemd-networkd[1453]: lxc72197b14df1d: Gained carrier Sep 9 05:04:26.999722 systemd-networkd[1453]: cilium_vxlan: Gained IPv6LL Sep 9 05:04:27.306141 kubelet[2678]: E0909 05:04:27.305858 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:27.639648 systemd-networkd[1453]: lxc_health: Gained IPv6LL Sep 9 05:04:28.599636 systemd-networkd[1453]: lxc72197b14df1d: Gained IPv6LL Sep 9 05:04:28.727640 systemd-networkd[1453]: lxc3b11b0b1b92e: Gained IPv6LL Sep 9 05:04:29.468906 kubelet[2678]: I0909 05:04:29.468858 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:04:29.470835 kubelet[2678]: E0909 05:04:29.470745 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 9 05:04:30.308886 kubelet[2678]: E0909 05:04:30.308847 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:04:30.485249 containerd[1534]: time="2025-09-09T05:04:30.485107174Z" level=info msg="connecting to shim eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f" address="unix:///run/containerd/s/cd6d5cab4bc3f66d707654c4e599b2c2605ab5325fe51c2d893fa9a86de6072b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:04:30.486927 containerd[1534]: time="2025-09-09T05:04:30.486841183Z" level=info msg="connecting to shim b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5" address="unix:///run/containerd/s/b3e89740e68cf74f300f2273ae3f49c648177d518fae6a258e9f4e95ec007e30" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:04:30.509642 systemd[1]: Started cri-containerd-b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5.scope - libcontainer container b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5. Sep 9 05:04:30.512385 systemd[1]: Started cri-containerd-eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f.scope - libcontainer container eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f. 
Sep 9 05:04:30.522943 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 05:04:30.524521 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 05:04:30.551031 containerd[1534]: time="2025-09-09T05:04:30.550975536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w4xbc,Uid:7659fb80-93a4-4265-af6f-c5e452238e52,Namespace:kube-system,Attempt:0,} returns sandbox id \"b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5\""
Sep 9 05:04:30.551927 kubelet[2678]: E0909 05:04:30.551903 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:30.555227 containerd[1534]: time="2025-09-09T05:04:30.555192924Z" level=info msg="CreateContainer within sandbox \"b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 05:04:30.559849 containerd[1534]: time="2025-09-09T05:04:30.559405832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2b4g,Uid:121fb0da-4683-40d1-9b9a-6e79b7ec0ffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f\""
Sep 9 05:04:30.560271 kubelet[2678]: E0909 05:04:30.560248 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:30.562506 containerd[1534]: time="2025-09-09T05:04:30.562475057Z" level=info msg="CreateContainer within sandbox \"eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 05:04:30.568133 containerd[1534]: time="2025-09-09T05:04:30.568103923Z" level=info msg="Container 416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:30.570175 containerd[1534]: time="2025-09-09T05:04:30.569675326Z" level=info msg="Container cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:04:30.573220 containerd[1534]: time="2025-09-09T05:04:30.573193757Z" level=info msg="CreateContainer within sandbox \"b38c990f8fbf2b25ce37f5daecf7b408df8bdd40dd8c71a1af724b56ee037ed5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263\""
Sep 9 05:04:30.573911 containerd[1534]: time="2025-09-09T05:04:30.573884032Z" level=info msg="StartContainer for \"416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263\""
Sep 9 05:04:30.574771 containerd[1534]: time="2025-09-09T05:04:30.574746555Z" level=info msg="connecting to shim 416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263" address="unix:///run/containerd/s/b3e89740e68cf74f300f2273ae3f49c648177d518fae6a258e9f4e95ec007e30" protocol=ttrpc version=3
Sep 9 05:04:30.575732 containerd[1534]: time="2025-09-09T05:04:30.575679658Z" level=info msg="CreateContainer within sandbox \"eb6f77fb20e7a56e33541c486fc0ea9d63c748f7c75bbc236ee9c17bdae8706f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f\""
Sep 9 05:04:30.576373 containerd[1534]: time="2025-09-09T05:04:30.576341884Z" level=info msg="StartContainer for \"cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f\""
Sep 9 05:04:30.578177 containerd[1534]: time="2025-09-09T05:04:30.578150954Z" level=info msg="connecting to shim cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f" address="unix:///run/containerd/s/cd6d5cab4bc3f66d707654c4e599b2c2605ab5325fe51c2d893fa9a86de6072b" protocol=ttrpc version=3
Sep 9 05:04:30.607702 systemd[1]: Started cri-containerd-416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263.scope - libcontainer container 416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263.
Sep 9 05:04:30.610493 systemd[1]: Started cri-containerd-cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f.scope - libcontainer container cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f.
Sep 9 05:04:30.638729 containerd[1534]: time="2025-09-09T05:04:30.638624675Z" level=info msg="StartContainer for \"416ad35d95cc930f19404ee66ea8d4f6aec1b3a457a0e7c31342d8439e79b263\" returns successfully"
Sep 9 05:04:30.652978 containerd[1534]: time="2025-09-09T05:04:30.652941790Z" level=info msg="StartContainer for \"cbfd7df78fa63e06a9be774afc0d5aec8e857d405e73d1d62a948fcfcc4e797f\" returns successfully"
Sep 9 05:04:31.314634 kubelet[2678]: E0909 05:04:31.314448 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:31.314634 kubelet[2678]: E0909 05:04:31.314452 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:31.325131 kubelet[2678]: I0909 05:04:31.325016 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g2b4g" podStartSLOduration=18.324913154 podStartE2EDuration="18.324913154s" podCreationTimestamp="2025-09-09 05:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:31.323787729 +0000 UTC m=+25.237078616" watchObservedRunningTime="2025-09-09 05:04:31.324913154 +0000 UTC m=+25.238204041"
Sep 9 05:04:31.463117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575814130.mount: Deactivated successfully.
Sep 9 05:04:32.315514 kubelet[2678]: E0909 05:04:32.315465 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:33.317608 kubelet[2678]: E0909 05:04:33.317579 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:35.234722 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:34690.service - OpenSSH per-connection server daemon (10.0.0.1:34690).
Sep 9 05:04:35.302765 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 34690 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:35.304199 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:35.308561 systemd-logind[1521]: New session 8 of user core.
Sep 9 05:04:35.321659 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 05:04:35.440910 sshd[4025]: Connection closed by 10.0.0.1 port 34690
Sep 9 05:04:35.441217 sshd-session[4022]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:35.444864 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:34690.service: Deactivated successfully.
Sep 9 05:04:35.448559 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 05:04:35.450447 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit.
Sep 9 05:04:35.452081 systemd-logind[1521]: Removed session 8.
Sep 9 05:04:40.455654 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:34042.service - OpenSSH per-connection server daemon (10.0.0.1:34042).
Sep 9 05:04:40.524380 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 34042 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:40.526345 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:40.531363 systemd-logind[1521]: New session 9 of user core.
Sep 9 05:04:40.543656 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 05:04:40.655609 sshd[4043]: Connection closed by 10.0.0.1 port 34042
Sep 9 05:04:40.656093 sshd-session[4040]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:40.659649 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:34042.service: Deactivated successfully.
Sep 9 05:04:40.661301 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 05:04:40.662669 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit.
Sep 9 05:04:40.663628 systemd-logind[1521]: Removed session 9.
Sep 9 05:04:41.313659 kubelet[2678]: E0909 05:04:41.313481 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:41.333452 kubelet[2678]: E0909 05:04:41.333424 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:04:41.337143 kubelet[2678]: I0909 05:04:41.337075 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w4xbc" podStartSLOduration=28.337058559 podStartE2EDuration="28.337058559s" podCreationTimestamp="2025-09-09 05:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:04:31.335963982 +0000 UTC m=+25.249254869" watchObservedRunningTime="2025-09-09 05:04:41.337058559 +0000 UTC m=+35.250349446"
Sep 9 05:04:45.669610 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:34048.service - OpenSSH per-connection server daemon (10.0.0.1:34048).
Sep 9 05:04:45.723086 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 34048 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:45.724183 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:45.727549 systemd-logind[1521]: New session 10 of user core.
Sep 9 05:04:45.738644 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:04:45.861125 sshd[4066]: Connection closed by 10.0.0.1 port 34048
Sep 9 05:04:45.861869 sshd-session[4063]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:45.865937 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:34048.service: Deactivated successfully.
Sep 9 05:04:45.867569 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:04:45.868233 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:04:45.869857 systemd-logind[1521]: Removed session 10.
Sep 9 05:04:50.884667 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:58060.service - OpenSSH per-connection server daemon (10.0.0.1:58060).
Sep 9 05:04:50.956185 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 58060 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:50.957286 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:50.960850 systemd-logind[1521]: New session 11 of user core.
Sep 9 05:04:50.975642 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:04:51.093990 sshd[4084]: Connection closed by 10.0.0.1 port 58060
Sep 9 05:04:51.095479 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:51.104470 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:58060.service: Deactivated successfully.
Sep 9 05:04:51.106466 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:04:51.107304 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:04:51.110085 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:58070.service - OpenSSH per-connection server daemon (10.0.0.1:58070).
Sep 9 05:04:51.112021 systemd-logind[1521]: Removed session 11.
Sep 9 05:04:51.166770 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 58070 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:51.167772 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:51.171843 systemd-logind[1521]: New session 12 of user core.
Sep 9 05:04:51.187630 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:04:51.328903 sshd[4101]: Connection closed by 10.0.0.1 port 58070
Sep 9 05:04:51.329234 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:51.338526 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:58070.service: Deactivated successfully.
Sep 9 05:04:51.341357 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:04:51.342679 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:04:51.350952 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076).
Sep 9 05:04:51.353092 systemd-logind[1521]: Removed session 12.
Sep 9 05:04:51.415302 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:51.416614 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:51.420521 systemd-logind[1521]: New session 13 of user core.
Sep 9 05:04:51.427659 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:04:51.539697 sshd[4117]: Connection closed by 10.0.0.1 port 58076
Sep 9 05:04:51.540006 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:51.543879 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:58076.service: Deactivated successfully.
Sep 9 05:04:51.545767 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:04:51.546627 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:04:51.547676 systemd-logind[1521]: Removed session 13.
Sep 9 05:04:56.555682 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:58088.service - OpenSSH per-connection server daemon (10.0.0.1:58088).
Sep 9 05:04:56.621621 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 58088 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:04:56.622716 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:04:56.626323 systemd-logind[1521]: New session 14 of user core.
Sep 9 05:04:56.632643 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:04:56.744881 sshd[4133]: Connection closed by 10.0.0.1 port 58088
Sep 9 05:04:56.745205 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Sep 9 05:04:56.748904 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:58088.service: Deactivated successfully.
Sep 9 05:04:56.750547 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:04:56.751166 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:04:56.752174 systemd-logind[1521]: Removed session 14.
Sep 9 05:05:01.761556 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728).
Sep 9 05:05:01.828736 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:01.829860 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:01.833563 systemd-logind[1521]: New session 15 of user core.
Sep 9 05:05:01.842662 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:05:01.951095 sshd[4150]: Connection closed by 10.0.0.1 port 55728
Sep 9 05:05:01.951556 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:01.961698 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:55728.service: Deactivated successfully.
Sep 9 05:05:01.963252 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:05:01.963919 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:05:01.966135 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:55738.service - OpenSSH per-connection server daemon (10.0.0.1:55738).
Sep 9 05:05:01.967760 systemd-logind[1521]: Removed session 15.
Sep 9 05:05:02.021538 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 55738 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:02.022608 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:02.027432 systemd-logind[1521]: New session 16 of user core.
Sep 9 05:05:02.035655 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:05:02.207493 sshd[4166]: Connection closed by 10.0.0.1 port 55738
Sep 9 05:05:02.207955 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:02.219552 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:55738.service: Deactivated successfully.
Sep 9 05:05:02.221190 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:05:02.222128 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:05:02.224244 systemd-logind[1521]: Removed session 16.
Sep 9 05:05:02.226080 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:55742.service - OpenSSH per-connection server daemon (10.0.0.1:55742).
Sep 9 05:05:02.291555 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 55742 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:02.292529 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:02.296870 systemd-logind[1521]: New session 17 of user core.
Sep 9 05:05:02.312634 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:05:02.891874 sshd[4181]: Connection closed by 10.0.0.1 port 55742
Sep 9 05:05:02.892396 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:02.904468 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:55742.service: Deactivated successfully.
Sep 9 05:05:02.908643 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:05:02.909670 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:05:02.915294 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:55758.service - OpenSSH per-connection server daemon (10.0.0.1:55758).
Sep 9 05:05:02.916429 systemd-logind[1521]: Removed session 17.
Sep 9 05:05:02.972458 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 55758 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:02.973609 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:02.977904 systemd-logind[1521]: New session 18 of user core.
Sep 9 05:05:02.987678 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:05:03.200035 sshd[4202]: Connection closed by 10.0.0.1 port 55758
Sep 9 05:05:03.200249 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:03.208638 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:55758.service: Deactivated successfully.
Sep 9 05:05:03.210615 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:05:03.212103 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:05:03.214950 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:55762.service - OpenSSH per-connection server daemon (10.0.0.1:55762).
Sep 9 05:05:03.216721 systemd-logind[1521]: Removed session 18.
Sep 9 05:05:03.287675 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 55762 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:03.288915 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:03.293407 systemd-logind[1521]: New session 19 of user core.
Sep 9 05:05:03.305656 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:05:03.414429 sshd[4216]: Connection closed by 10.0.0.1 port 55762
Sep 9 05:05:03.414762 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:03.418127 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:55762.service: Deactivated successfully.
Sep 9 05:05:03.419947 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:05:03.421796 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:05:03.422946 systemd-logind[1521]: Removed session 19.
Sep 9 05:05:08.426613 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774).
Sep 9 05:05:08.490391 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:08.491799 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:08.496334 systemd-logind[1521]: New session 20 of user core.
Sep 9 05:05:08.505650 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:05:08.614288 sshd[4237]: Connection closed by 10.0.0.1 port 55774
Sep 9 05:05:08.614617 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:08.617779 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:55774.service: Deactivated successfully.
Sep 9 05:05:08.620568 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:05:08.621648 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:05:08.623460 systemd-logind[1521]: Removed session 20.
Sep 9 05:05:13.626128 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:45368.service - OpenSSH per-connection server daemon (10.0.0.1:45368).
Sep 9 05:05:13.680052 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 45368 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:13.681121 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:13.685566 systemd-logind[1521]: New session 21 of user core.
Sep 9 05:05:13.694655 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:05:13.815483 sshd[4256]: Connection closed by 10.0.0.1 port 45368
Sep 9 05:05:13.816870 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:13.821550 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:05:13.821796 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:05:13.822944 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:45368.service: Deactivated successfully.
Sep 9 05:05:13.826429 systemd-logind[1521]: Removed session 21.
Sep 9 05:05:18.830828 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:45370.service - OpenSSH per-connection server daemon (10.0.0.1:45370).
Sep 9 05:05:18.896057 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 45370 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:18.897363 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:18.901530 systemd-logind[1521]: New session 22 of user core.
Sep 9 05:05:18.909635 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:05:19.015552 sshd[4272]: Connection closed by 10.0.0.1 port 45370
Sep 9 05:05:19.015524 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Sep 9 05:05:19.026878 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:45370.service: Deactivated successfully.
Sep 9 05:05:19.029876 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:05:19.030548 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:05:19.032554 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380).
Sep 9 05:05:19.033426 systemd-logind[1521]: Removed session 22.
Sep 9 05:05:19.094800 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE
Sep 9 05:05:19.096140 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:05:19.100434 systemd-logind[1521]: New session 23 of user core.
Sep 9 05:05:19.111669 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:05:20.169363 kubelet[2678]: E0909 05:05:20.169027 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:05:20.650099 containerd[1534]: time="2025-09-09T05:05:20.649927188Z" level=info msg="StopContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" with timeout 30 (s)"
Sep 9 05:05:20.651372 containerd[1534]: time="2025-09-09T05:05:20.651323012Z" level=info msg="Stop container \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" with signal terminated"
Sep 9 05:05:20.677380 systemd[1]: cri-containerd-dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6.scope: Deactivated successfully.
Sep 9 05:05:20.678793 containerd[1534]: time="2025-09-09T05:05:20.678744120Z" level=info msg="received exit event container_id:\"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" id:\"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" pid:3261 exited_at:{seconds:1757394320 nanos:678441821}"
Sep 9 05:05:20.679184 containerd[1534]: time="2025-09-09T05:05:20.679156852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" id:\"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" pid:3261 exited_at:{seconds:1757394320 nanos:678441821}"
Sep 9 05:05:20.694900 containerd[1534]: time="2025-09-09T05:05:20.694856969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" id:\"582486e56f8f432f7169b1c89780df4c1c66b713778c8ebc6bd08def684bd1ed\" pid:4315 exited_at:{seconds:1757394320 nanos:694326366}"
Sep 9 05:05:20.696817 containerd[1534]: time="2025-09-09T05:05:20.696762118Z" level=info msg="StopContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" with timeout 2 (s)"
Sep 9 05:05:20.697580 containerd[1534]: time="2025-09-09T05:05:20.697549543Z" level=info msg="Stop container \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" with signal terminated"
Sep 9 05:05:20.703465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6-rootfs.mount: Deactivated successfully.
Sep 9 05:05:20.703950 containerd[1534]: time="2025-09-09T05:05:20.703774114Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 05:05:20.705880 systemd-networkd[1453]: lxc_health: Link DOWN
Sep 9 05:05:20.705892 systemd-networkd[1453]: lxc_health: Lost carrier
Sep 9 05:05:20.719508 containerd[1534]: time="2025-09-09T05:05:20.719453953Z" level=info msg="StopContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" returns successfully"
Sep 9 05:05:20.721780 containerd[1534]: time="2025-09-09T05:05:20.721735995Z" level=info msg="StopPodSandbox for \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\""
Sep 9 05:05:20.722269 systemd[1]: cri-containerd-1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc.scope: Deactivated successfully.
Sep 9 05:05:20.722576 systemd[1]: cri-containerd-1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc.scope: Consumed 6.337s CPU time, 125.5M memory peak, 136K read from disk, 12.9M written to disk.
Sep 9 05:05:20.725827 containerd[1534]: time="2025-09-09T05:05:20.725801475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" pid:3335 exited_at:{seconds:1757394320 nanos:725549212}"
Sep 9 05:05:20.725902 containerd[1534]: time="2025-09-09T05:05:20.725870190Z" level=info msg="received exit event container_id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" id:\"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" pid:3335 exited_at:{seconds:1757394320 nanos:725549212}"
Sep 9 05:05:20.734029 containerd[1534]: time="2025-09-09T05:05:20.733997470Z" level=info msg="Container to stop \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.739706 systemd[1]: cri-containerd-f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682.scope: Deactivated successfully.
Sep 9 05:05:20.741138 containerd[1534]: time="2025-09-09T05:05:20.741100980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" id:\"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" pid:2983 exit_status:137 exited_at:{seconds:1757394320 nanos:740875275}"
Sep 9 05:05:20.745941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc-rootfs.mount: Deactivated successfully.
Sep 9 05:05:20.759744 containerd[1534]: time="2025-09-09T05:05:20.759710416Z" level=info msg="StopContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" returns successfully"
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760140147Z" level=info msg="StopPodSandbox for \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\""
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760193023Z" level=info msg="Container to stop \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760204222Z" level=info msg="Container to stop \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760212382Z" level=info msg="Container to stop \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760220701Z" level=info msg="Container to stop \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.760390 containerd[1534]: time="2025-09-09T05:05:20.760228021Z" level=info msg="Container to stop \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:05:20.767110 systemd[1]: cri-containerd-d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212.scope: Deactivated successfully.
Sep 9 05:05:20.769423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682-rootfs.mount: Deactivated successfully.
Sep 9 05:05:20.776384 containerd[1534]: time="2025-09-09T05:05:20.776345709Z" level=info msg="shim disconnected" id=f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682 namespace=k8s.io
Sep 9 05:05:20.783186 containerd[1534]: time="2025-09-09T05:05:20.776376547Z" level=warning msg="cleaning up after shim disconnected" id=f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682 namespace=k8s.io
Sep 9 05:05:20.783186 containerd[1534]: time="2025-09-09T05:05:20.783007130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 05:05:20.799647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212-rootfs.mount: Deactivated successfully.
Sep 9 05:05:20.801107 containerd[1534]: time="2025-09-09T05:05:20.800975410Z" level=info msg="received exit event sandbox_id:\"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" exit_status:137 exited_at:{seconds:1757394320 nanos:740875275}"
Sep 9 05:05:20.801766 containerd[1534]: time="2025-09-09T05:05:20.801371623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" id:\"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" pid:2827 exit_status:137 exited_at:{seconds:1757394320 nanos:773717730}"
Sep 9 05:05:20.801766 containerd[1534]: time="2025-09-09T05:05:20.801632525Z" level=info msg="TearDown network for sandbox \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" successfully"
Sep 9 05:05:20.801766 containerd[1534]: time="2025-09-09T05:05:20.801652404Z" level=info msg="StopPodSandbox for \"f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682\" returns successfully"
Sep 9 05:05:20.803177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5bb05b89dbcbd72e07c51a8fbacd20e87a4e82f1d962d5bd5ad7573cf9cc682-shm.mount: Deactivated successfully.
Sep 9 05:05:20.807629 containerd[1534]: time="2025-09-09T05:05:20.807492761Z" level=info msg="shim disconnected" id=d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212 namespace=k8s.io Sep 9 05:05:20.807710 containerd[1534]: time="2025-09-09T05:05:20.807630232Z" level=warning msg="cleaning up after shim disconnected" id=d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212 namespace=k8s.io Sep 9 05:05:20.807710 containerd[1534]: time="2025-09-09T05:05:20.807659469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:05:20.809070 containerd[1534]: time="2025-09-09T05:05:20.809047014Z" level=info msg="TearDown network for sandbox \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" successfully" Sep 9 05:05:20.809193 containerd[1534]: time="2025-09-09T05:05:20.809174165Z" level=info msg="StopPodSandbox for \"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" returns successfully" Sep 9 05:05:20.809335 containerd[1534]: time="2025-09-09T05:05:20.809056853Z" level=info msg="received exit event sandbox_id:\"d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212\" exit_status:137 exited_at:{seconds:1757394320 nanos:773717730}" Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.932987 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-cgroup\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.933046 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-kernel\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.933068 2678 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-net\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.933086 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-run\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.933110 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13096f83-41ee-40dc-b00b-da0988c4264d-clustermesh-secrets\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.933187 kubelet[2678]: I0909 05:05:20.933127 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-bpf-maps\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933162 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cni-path\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933178 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-lib-modules\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: 
\"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933197 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-227zr\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-kube-api-access-227zr\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933216 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-hubble-tls\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933233 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gp67\" (UniqueName: \"kubernetes.io/projected/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-kube-api-access-4gp67\") pod \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\" (UID: \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\") " Sep 9 05:05:20.934971 kubelet[2678]: I0909 05:05:20.933268 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-etc-cni-netd\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.933285 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-config-path\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.933299 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-xtables-lock\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.933316 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-cilium-config-path\") pod \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\" (UID: \"47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8\") " Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.933331 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-hostproc\") pod \"13096f83-41ee-40dc-b00b-da0988c4264d\" (UID: \"13096f83-41ee-40dc-b00b-da0988c4264d\") " Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.934982 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-hostproc" (OuterVolumeSpecName: "hostproc") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.935111 kubelet[2678]: I0909 05:05:20.934984 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.935274 kubelet[2678]: I0909 05:05:20.935011 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.935346 kubelet[2678]: I0909 05:05:20.935320 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.935428 kubelet[2678]: I0909 05:05:20.935412 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936203 kubelet[2678]: I0909 05:05:20.936172 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936256 kubelet[2678]: I0909 05:05:20.936212 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936680 kubelet[2678]: I0909 05:05:20.936652 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936735 kubelet[2678]: I0909 05:05:20.936681 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cni-path" (OuterVolumeSpecName: "cni-path") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936735 kubelet[2678]: I0909 05:05:20.936699 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:05:20.936735 kubelet[2678]: I0909 05:05:20.936718 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:05:20.938507 kubelet[2678]: I0909 05:05:20.938447 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8" (UID: "47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:05:20.939068 kubelet[2678]: I0909 05:05:20.939043 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13096f83-41ee-40dc-b00b-da0988c4264d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:05:20.939163 kubelet[2678]: I0909 05:05:20.939047 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-kube-api-access-4gp67" (OuterVolumeSpecName: "kube-api-access-4gp67") pod "47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8" (UID: "47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8"). InnerVolumeSpecName "kube-api-access-4gp67". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:05:20.939223 kubelet[2678]: I0909 05:05:20.939064 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:05:20.939272 kubelet[2678]: I0909 05:05:20.939181 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-kube-api-access-227zr" (OuterVolumeSpecName: "kube-api-access-227zr") pod "13096f83-41ee-40dc-b00b-da0988c4264d" (UID: "13096f83-41ee-40dc-b00b-da0988c4264d"). InnerVolumeSpecName "kube-api-access-227zr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:05:21.033913 kubelet[2678]: I0909 05:05:21.033868 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.033913 kubelet[2678]: I0909 05:05:21.033898 2678 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.033913 kubelet[2678]: I0909 05:05:21.033910 2678 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.033913 kubelet[2678]: I0909 05:05:21.033919 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033927 2678 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13096f83-41ee-40dc-b00b-da0988c4264d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033938 2678 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033945 2678 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033953 2678 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033963 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gp67\" (UniqueName: \"kubernetes.io/projected/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-kube-api-access-4gp67\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033971 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-227zr\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-kube-api-access-227zr\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.033990 2678 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13096f83-41ee-40dc-b00b-da0988c4264d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034091 kubelet[2678]: I0909 05:05:21.034000 2678 
reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034251 kubelet[2678]: I0909 05:05:21.034010 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13096f83-41ee-40dc-b00b-da0988c4264d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034251 kubelet[2678]: I0909 05:05:21.034018 2678 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034251 kubelet[2678]: I0909 05:05:21.034026 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.034251 kubelet[2678]: I0909 05:05:21.034034 2678 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13096f83-41ee-40dc-b00b-da0988c4264d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:05:21.229360 kubelet[2678]: E0909 05:05:21.229245 2678 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:05:21.408910 kubelet[2678]: I0909 05:05:21.408885 2678 scope.go:117] "RemoveContainer" containerID="dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6" Sep 9 05:05:21.411905 containerd[1534]: time="2025-09-09T05:05:21.411874611Z" level=info msg="RemoveContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\"" Sep 9 05:05:21.413630 systemd[1]: Removed slice 
kubepods-besteffort-pod47bf2e54_5d5a_4faf_95ca_09c48ae9f8a8.slice - libcontainer container kubepods-besteffort-pod47bf2e54_5d5a_4faf_95ca_09c48ae9f8a8.slice. Sep 9 05:05:21.419999 systemd[1]: Removed slice kubepods-burstable-pod13096f83_41ee_40dc_b00b_da0988c4264d.slice - libcontainer container kubepods-burstable-pod13096f83_41ee_40dc_b00b_da0988c4264d.slice. Sep 9 05:05:21.420082 systemd[1]: kubepods-burstable-pod13096f83_41ee_40dc_b00b_da0988c4264d.slice: Consumed 6.433s CPU time, 125.9M memory peak, 152K read from disk, 12.9M written to disk. Sep 9 05:05:21.427835 containerd[1534]: time="2025-09-09T05:05:21.427797065Z" level=info msg="RemoveContainer for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" returns successfully" Sep 9 05:05:21.428358 kubelet[2678]: I0909 05:05:21.428295 2678 scope.go:117] "RemoveContainer" containerID="dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6" Sep 9 05:05:21.428656 containerd[1534]: time="2025-09-09T05:05:21.428612212Z" level=error msg="ContainerStatus for \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\": not found" Sep 9 05:05:21.429310 kubelet[2678]: E0909 05:05:21.428761 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\": not found" containerID="dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6" Sep 9 05:05:21.432811 kubelet[2678]: I0909 05:05:21.432717 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6"} err="failed to get container status \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"dd92c88610ae96a1d483aa825e45824f8c0b349de11bf793cef7a12d22b743f6\": not found" Sep 9 05:05:21.432811 kubelet[2678]: I0909 05:05:21.432810 2678 scope.go:117] "RemoveContainer" containerID="1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc" Sep 9 05:05:21.434239 containerd[1534]: time="2025-09-09T05:05:21.434213851Z" level=info msg="RemoveContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\"" Sep 9 05:05:21.440204 containerd[1534]: time="2025-09-09T05:05:21.440176987Z" level=info msg="RemoveContainer for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" returns successfully" Sep 9 05:05:21.440443 kubelet[2678]: I0909 05:05:21.440353 2678 scope.go:117] "RemoveContainer" containerID="bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94" Sep 9 05:05:21.441556 containerd[1534]: time="2025-09-09T05:05:21.441526340Z" level=info msg="RemoveContainer for \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\"" Sep 9 05:05:21.445085 containerd[1534]: time="2025-09-09T05:05:21.445052873Z" level=info msg="RemoveContainer for \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" returns successfully" Sep 9 05:05:21.445211 kubelet[2678]: I0909 05:05:21.445192 2678 scope.go:117] "RemoveContainer" containerID="c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722" Sep 9 05:05:21.447613 containerd[1534]: time="2025-09-09T05:05:21.447591669Z" level=info msg="RemoveContainer for \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\"" Sep 9 05:05:21.451544 containerd[1534]: time="2025-09-09T05:05:21.450735547Z" level=info msg="RemoveContainer for \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" returns successfully" Sep 9 05:05:21.453650 kubelet[2678]: I0909 05:05:21.453627 2678 scope.go:117] "RemoveContainer" 
containerID="6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6" Sep 9 05:05:21.455249 containerd[1534]: time="2025-09-09T05:05:21.455224978Z" level=info msg="RemoveContainer for \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\"" Sep 9 05:05:21.458246 containerd[1534]: time="2025-09-09T05:05:21.458210425Z" level=info msg="RemoveContainer for \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" returns successfully" Sep 9 05:05:21.458419 kubelet[2678]: I0909 05:05:21.458401 2678 scope.go:117] "RemoveContainer" containerID="799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff" Sep 9 05:05:21.460073 containerd[1534]: time="2025-09-09T05:05:21.460052786Z" level=info msg="RemoveContainer for \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\"" Sep 9 05:05:21.462381 containerd[1534]: time="2025-09-09T05:05:21.462360838Z" level=info msg="RemoveContainer for \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" returns successfully" Sep 9 05:05:21.462549 kubelet[2678]: I0909 05:05:21.462519 2678 scope.go:117] "RemoveContainer" containerID="1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc" Sep 9 05:05:21.462754 containerd[1534]: time="2025-09-09T05:05:21.462716375Z" level=error msg="ContainerStatus for \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\": not found" Sep 9 05:05:21.462923 kubelet[2678]: E0909 05:05:21.462895 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\": not found" containerID="1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc" Sep 9 05:05:21.463015 kubelet[2678]: I0909 05:05:21.462992 2678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc"} err="failed to get container status \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\": rpc error: code = NotFound desc = an error occurred when try to find container \"1562a93f68c7996c1e4f5b46bafeab95d0b08a2aafcaf2e9278d736267762afc\": not found" Sep 9 05:05:21.463066 kubelet[2678]: I0909 05:05:21.463057 2678 scope.go:117] "RemoveContainer" containerID="bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94" Sep 9 05:05:21.463297 containerd[1534]: time="2025-09-09T05:05:21.463269539Z" level=error msg="ContainerStatus for \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\": not found" Sep 9 05:05:21.463383 kubelet[2678]: E0909 05:05:21.463365 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\": not found" containerID="bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94" Sep 9 05:05:21.463454 kubelet[2678]: I0909 05:05:21.463386 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94"} err="failed to get container status \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc2a4466594266edad82986b2171bca3f9e15cc7c3374e6c10cd7964d4860f94\": not found" Sep 9 05:05:21.463454 kubelet[2678]: I0909 05:05:21.463402 2678 scope.go:117] "RemoveContainer" 
containerID="c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722" Sep 9 05:05:21.463638 containerd[1534]: time="2025-09-09T05:05:21.463552001Z" level=error msg="ContainerStatus for \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\": not found" Sep 9 05:05:21.463761 kubelet[2678]: E0909 05:05:21.463744 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\": not found" containerID="c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722" Sep 9 05:05:21.463860 kubelet[2678]: I0909 05:05:21.463842 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722"} err="failed to get container status \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4f65cd1bedcf200f6cfbb80a5123670453db3e9092affdfd92a24e05fbd2722\": not found" Sep 9 05:05:21.463980 kubelet[2678]: I0909 05:05:21.463909 2678 scope.go:117] "RemoveContainer" containerID="6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6" Sep 9 05:05:21.464080 containerd[1534]: time="2025-09-09T05:05:21.464048209Z" level=error msg="ContainerStatus for \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\": not found" Sep 9 05:05:21.464161 kubelet[2678]: E0909 05:05:21.464144 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\": not found" containerID="6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6" Sep 9 05:05:21.464196 kubelet[2678]: I0909 05:05:21.464167 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6"} err="failed to get container status \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f762ebbae5c59212b5127c6c94228a6c5f8d504371a24131361c1c952b273b6\": not found" Sep 9 05:05:21.464196 kubelet[2678]: I0909 05:05:21.464181 2678 scope.go:117] "RemoveContainer" containerID="799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff" Sep 9 05:05:21.464334 containerd[1534]: time="2025-09-09T05:05:21.464293593Z" level=error msg="ContainerStatus for \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\": not found" Sep 9 05:05:21.464479 kubelet[2678]: E0909 05:05:21.464443 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\": not found" containerID="799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff" Sep 9 05:05:21.464609 kubelet[2678]: I0909 05:05:21.464467 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff"} err="failed to get container status \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"799e2ff7d1ac2e181bb28f44cece9a555221a08b765f2bf7552e4ff5dd1b71ff\": not found" Sep 9 05:05:21.703578 systemd[1]: var-lib-kubelet-pods-47bf2e54\x2d5d5a\x2d4faf\x2d95ca\x2d09c48ae9f8a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gp67.mount: Deactivated successfully. Sep 9 05:05:21.703674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d04a6d1641b7f55ec3a3a239483cb0e6d61ea0765e64125a0c457f386a02f212-shm.mount: Deactivated successfully. Sep 9 05:05:21.703724 systemd[1]: var-lib-kubelet-pods-13096f83\x2d41ee\x2d40dc\x2db00b\x2dda0988c4264d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d227zr.mount: Deactivated successfully. Sep 9 05:05:21.703777 systemd[1]: var-lib-kubelet-pods-13096f83\x2d41ee\x2d40dc\x2db00b\x2dda0988c4264d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:05:21.703838 systemd[1]: var-lib-kubelet-pods-13096f83\x2d41ee\x2d40dc\x2db00b\x2dda0988c4264d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:05:22.171768 kubelet[2678]: I0909 05:05:22.171736 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13096f83-41ee-40dc-b00b-da0988c4264d" path="/var/lib/kubelet/pods/13096f83-41ee-40dc-b00b-da0988c4264d/volumes" Sep 9 05:05:22.172246 kubelet[2678]: I0909 05:05:22.172229 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8" path="/var/lib/kubelet/pods/47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8/volumes" Sep 9 05:05:22.609942 sshd[4288]: Connection closed by 10.0.0.1 port 45380 Sep 9 05:05:22.610706 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Sep 9 05:05:22.623659 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:45380.service: Deactivated successfully. Sep 9 05:05:22.625952 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:05:22.627484 systemd-logind[1521]: Session 23 logged out. 
Waiting for processes to exit. Sep 9 05:05:22.629826 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:52706.service - OpenSSH per-connection server daemon (10.0.0.1:52706). Sep 9 05:05:22.630312 systemd-logind[1521]: Removed session 23. Sep 9 05:05:22.690445 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 52706 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:05:22.691730 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:05:22.695334 systemd-logind[1521]: New session 24 of user core. Sep 9 05:05:22.701640 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:05:23.853600 sshd[4443]: Connection closed by 10.0.0.1 port 52706 Sep 9 05:05:23.853959 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Sep 9 05:05:23.863086 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:52706.service: Deactivated successfully. Sep 9 05:05:23.866838 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:05:23.867701 systemd[1]: session-24.scope: Consumed 1.075s CPU time, 23.8M memory peak. Sep 9 05:05:23.869558 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:05:23.872644 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:52716.service - OpenSSH per-connection server daemon (10.0.0.1:52716). Sep 9 05:05:23.876414 systemd-logind[1521]: Removed session 24. 
Sep 9 05:05:23.877692 kubelet[2678]: I0909 05:05:23.877658 2678 memory_manager.go:355] "RemoveStaleState removing state" podUID="13096f83-41ee-40dc-b00b-da0988c4264d" containerName="cilium-agent" Sep 9 05:05:23.877692 kubelet[2678]: I0909 05:05:23.877683 2678 memory_manager.go:355] "RemoveStaleState removing state" podUID="47bf2e54-5d5a-4faf-95ca-09c48ae9f8a8" containerName="cilium-operator" Sep 9 05:05:23.891232 systemd[1]: Created slice kubepods-burstable-pod3cff5b5d_de41_463d_b356_2b430dce30b6.slice - libcontainer container kubepods-burstable-pod3cff5b5d_de41_463d_b356_2b430dce30b6.slice. Sep 9 05:05:23.943636 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 52716 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:05:23.944813 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:05:23.948544 systemd-logind[1521]: New session 25 of user core. Sep 9 05:05:23.951468 kubelet[2678]: I0909 05:05:23.951440 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-cilium-cgroup\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951476 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-cni-path\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951506 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3cff5b5d-de41-463d-b356-2b430dce30b6-cilium-ipsec-secrets\") pod \"cilium-rvtf5\" (UID: 
\"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951524 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-hostproc\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951540 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3cff5b5d-de41-463d-b356-2b430dce30b6-cilium-config-path\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951556 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-host-proc-sys-net\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951578 kubelet[2678]: I0909 05:05:23.951571 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngj4\" (UniqueName: \"kubernetes.io/projected/3cff5b5d-de41-463d-b356-2b430dce30b6-kube-api-access-xngj4\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 05:05:23.951588 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-cilium-run\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 
05:05:23.951602 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-bpf-maps\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 05:05:23.951616 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-host-proc-sys-kernel\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 05:05:23.951630 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3cff5b5d-de41-463d-b356-2b430dce30b6-hubble-tls\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 05:05:23.951645 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-lib-modules\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951749 kubelet[2678]: I0909 05:05:23.951660 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3cff5b5d-de41-463d-b356-2b430dce30b6-clustermesh-secrets\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951879 kubelet[2678]: I0909 05:05:23.951674 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-etc-cni-netd\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.951879 kubelet[2678]: I0909 05:05:23.951691 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cff5b5d-de41-463d-b356-2b430dce30b6-xtables-lock\") pod \"cilium-rvtf5\" (UID: \"3cff5b5d-de41-463d-b356-2b430dce30b6\") " pod="kube-system/cilium-rvtf5" Sep 9 05:05:23.956745 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 05:05:24.006521 sshd[4458]: Connection closed by 10.0.0.1 port 52716 Sep 9 05:05:24.008094 sshd-session[4455]: pam_unix(sshd:session): session closed for user core Sep 9 05:05:24.016616 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:52716.service: Deactivated successfully. Sep 9 05:05:24.018277 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 05:05:24.021050 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit. Sep 9 05:05:24.022736 systemd[1]: Started sshd@25-10.0.0.93:22-10.0.0.1:52722.service - OpenSSH per-connection server daemon (10.0.0.1:52722). Sep 9 05:05:24.024228 systemd-logind[1521]: Removed session 25. Sep 9 05:05:24.083727 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 52722 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:05:24.084969 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:05:24.088679 systemd-logind[1521]: New session 26 of user core. Sep 9 05:05:24.098709 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 9 05:05:24.196395 kubelet[2678]: E0909 05:05:24.196172 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:24.198192 containerd[1534]: time="2025-09-09T05:05:24.198156141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvtf5,Uid:3cff5b5d-de41-463d-b356-2b430dce30b6,Namespace:kube-system,Attempt:0,}" Sep 9 05:05:24.215346 containerd[1534]: time="2025-09-09T05:05:24.215304255Z" level=info msg="connecting to shim feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:05:24.242717 systemd[1]: Started cri-containerd-feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447.scope - libcontainer container feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447. 
Sep 9 05:05:24.263725 containerd[1534]: time="2025-09-09T05:05:24.263650517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvtf5,Uid:3cff5b5d-de41-463d-b356-2b430dce30b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\"" Sep 9 05:05:24.264401 kubelet[2678]: E0909 05:05:24.264368 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:24.267302 containerd[1534]: time="2025-09-09T05:05:24.267269050Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:05:24.275077 containerd[1534]: time="2025-09-09T05:05:24.275028010Z" level=info msg="Container a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:05:24.280243 containerd[1534]: time="2025-09-09T05:05:24.280205942Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\"" Sep 9 05:05:24.280672 containerd[1534]: time="2025-09-09T05:05:24.280645359Z" level=info msg="StartContainer for \"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\"" Sep 9 05:05:24.281470 containerd[1534]: time="2025-09-09T05:05:24.281445518Z" level=info msg="connecting to shim a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" protocol=ttrpc version=3 Sep 9 05:05:24.299665 systemd[1]: Started cri-containerd-a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914.scope - libcontainer container 
a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914. Sep 9 05:05:24.323676 containerd[1534]: time="2025-09-09T05:05:24.323630938Z" level=info msg="StartContainer for \"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\" returns successfully" Sep 9 05:05:24.331173 systemd[1]: cri-containerd-a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914.scope: Deactivated successfully. Sep 9 05:05:24.332753 containerd[1534]: time="2025-09-09T05:05:24.332723549Z" level=info msg="received exit event container_id:\"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\" id:\"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\" pid:4537 exited_at:{seconds:1757394324 nanos:332474721}" Sep 9 05:05:24.332972 containerd[1534]: time="2025-09-09T05:05:24.332941737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\" id:\"a5cfee979561930c8a558622624a6448ca2bff9b181aa5742db4e3cad9b79914\" pid:4537 exited_at:{seconds:1757394324 nanos:332474721}" Sep 9 05:05:24.425813 kubelet[2678]: E0909 05:05:24.425763 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:24.429601 containerd[1534]: time="2025-09-09T05:05:24.429517668Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:05:24.436064 containerd[1534]: time="2025-09-09T05:05:24.436028611Z" level=info msg="Container ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:05:24.440968 containerd[1534]: time="2025-09-09T05:05:24.440931478Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\"" Sep 9 05:05:24.442820 containerd[1534]: time="2025-09-09T05:05:24.441879829Z" level=info msg="StartContainer for \"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\"" Sep 9 05:05:24.444856 containerd[1534]: time="2025-09-09T05:05:24.444793878Z" level=info msg="connecting to shim ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" protocol=ttrpc version=3 Sep 9 05:05:24.471694 systemd[1]: Started cri-containerd-ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484.scope - libcontainer container ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484. Sep 9 05:05:24.500881 systemd[1]: cri-containerd-ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484.scope: Deactivated successfully. 
Sep 9 05:05:24.503093 containerd[1534]: time="2025-09-09T05:05:24.503064708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\" id:\"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\" pid:4582 exited_at:{seconds:1757394324 nanos:502834039}" Sep 9 05:05:24.512426 containerd[1534]: time="2025-09-09T05:05:24.512396465Z" level=info msg="received exit event container_id:\"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\" id:\"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\" pid:4582 exited_at:{seconds:1757394324 nanos:502834039}" Sep 9 05:05:24.513569 containerd[1534]: time="2025-09-09T05:05:24.513546926Z" level=info msg="StartContainer for \"ec3693a0d6334cc051a320ab3ff99e34642fa7e89add4b926f6cdd6afe1f5484\" returns successfully" Sep 9 05:05:25.168908 kubelet[2678]: E0909 05:05:25.168872 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:25.433208 kubelet[2678]: E0909 05:05:25.432795 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:25.434975 containerd[1534]: time="2025-09-09T05:05:25.434886955Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:05:25.454391 containerd[1534]: time="2025-09-09T05:05:25.454335388Z" level=info msg="Container 78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:05:25.472120 containerd[1534]: time="2025-09-09T05:05:25.472047343Z" level=info msg="CreateContainer within sandbox 
\"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\"" Sep 9 05:05:25.472801 containerd[1534]: time="2025-09-09T05:05:25.472759589Z" level=info msg="StartContainer for \"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\"" Sep 9 05:05:25.474316 containerd[1534]: time="2025-09-09T05:05:25.474247638Z" level=info msg="connecting to shim 78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" protocol=ttrpc version=3 Sep 9 05:05:25.502676 systemd[1]: Started cri-containerd-78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658.scope - libcontainer container 78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658. Sep 9 05:05:25.539563 systemd[1]: cri-containerd-78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658.scope: Deactivated successfully. 
Sep 9 05:05:25.540571 containerd[1534]: time="2025-09-09T05:05:25.540541678Z" level=info msg="StartContainer for \"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\" returns successfully" Sep 9 05:05:25.541910 containerd[1534]: time="2025-09-09T05:05:25.541887054Z" level=info msg="received exit event container_id:\"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\" id:\"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\" pid:4626 exited_at:{seconds:1757394325 nanos:541475393}" Sep 9 05:05:25.542883 containerd[1534]: time="2025-09-09T05:05:25.542855807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\" id:\"78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658\" pid:4626 exited_at:{seconds:1757394325 nanos:541475393}" Sep 9 05:05:26.057256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78e2c036832f2a804bc85ae222ba8f92d71b9b001922d3fd7613d7717af7e658-rootfs.mount: Deactivated successfully. 
Sep 9 05:05:26.232092 kubelet[2678]: E0909 05:05:26.231756 2678 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:05:26.436985 kubelet[2678]: E0909 05:05:26.436942 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:26.438731 containerd[1534]: time="2025-09-09T05:05:26.438661871Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:05:26.447520 containerd[1534]: time="2025-09-09T05:05:26.447145299Z" level=info msg="Container 18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:05:26.456373 containerd[1534]: time="2025-09-09T05:05:26.455491533Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\"" Sep 9 05:05:26.456373 containerd[1534]: time="2025-09-09T05:05:26.456209382Z" level=info msg="StartContainer for \"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\"" Sep 9 05:05:26.457000 containerd[1534]: time="2025-09-09T05:05:26.456958909Z" level=info msg="connecting to shim 18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" protocol=ttrpc version=3 Sep 9 05:05:26.477639 systemd[1]: Started cri-containerd-18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c.scope - libcontainer container 
18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c. Sep 9 05:05:26.497872 systemd[1]: cri-containerd-18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c.scope: Deactivated successfully. Sep 9 05:05:26.499240 containerd[1534]: time="2025-09-09T05:05:26.499201378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\" id:\"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\" pid:4665 exited_at:{seconds:1757394326 nanos:498338256}" Sep 9 05:05:26.501023 containerd[1534]: time="2025-09-09T05:05:26.500969701Z" level=info msg="received exit event container_id:\"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\" id:\"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\" pid:4665 exited_at:{seconds:1757394326 nanos:498338256}" Sep 9 05:05:26.503072 containerd[1534]: time="2025-09-09T05:05:26.503045970Z" level=info msg="StartContainer for \"18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c\" returns successfully" Sep 9 05:05:26.519434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18362153cfbf5fba0a2d34bf39a43705c29b9a254024dd2761f48b98f099f30c-rootfs.mount: Deactivated successfully. Sep 9 05:05:27.442710 kubelet[2678]: E0909 05:05:27.442151 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:27.446525 containerd[1534]: time="2025-09-09T05:05:27.446237615Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:05:27.486514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852588647.mount: Deactivated successfully. 
Sep 9 05:05:27.487648 containerd[1534]: time="2025-09-09T05:05:27.487548360Z" level=info msg="Container fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:05:27.497469 containerd[1534]: time="2025-09-09T05:05:27.497421644Z" level=info msg="CreateContainer within sandbox \"feb81ae15987fd3cef2aca335a6468b2d238e4d50ea444e053b32ccdc58f4447\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\"" Sep 9 05:05:27.497897 containerd[1534]: time="2025-09-09T05:05:27.497877386Z" level=info msg="StartContainer for \"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\"" Sep 9 05:05:27.498974 containerd[1534]: time="2025-09-09T05:05:27.498948543Z" level=info msg="connecting to shim fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796" address="unix:///run/containerd/s/164d38510a21955347308a5fcc60dd5cdc9ce5ad306f453b6e23141c82dc7e11" protocol=ttrpc version=3 Sep 9 05:05:27.520671 systemd[1]: Started cri-containerd-fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796.scope - libcontainer container fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796. 
Sep 9 05:05:27.551688 containerd[1534]: time="2025-09-09T05:05:27.551633153Z" level=info msg="StartContainer for \"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" returns successfully" Sep 9 05:05:27.604177 containerd[1534]: time="2025-09-09T05:05:27.604141449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" id:\"c4ddffd5342056f512de38e9567fad686c5085483d80039652dcf0043d82da2b\" pid:4733 exited_at:{seconds:1757394327 nanos:603813062}" Sep 9 05:05:27.792128 kubelet[2678]: I0909 05:05:27.791955 2678 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:05:27Z","lastTransitionTime":"2025-09-09T05:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:05:27.805553 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 9 05:05:28.449007 kubelet[2678]: E0909 05:05:28.448976 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:28.464393 kubelet[2678]: I0909 05:05:28.464334 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rvtf5" podStartSLOduration=5.4643172589999995 podStartE2EDuration="5.464317259s" podCreationTimestamp="2025-09-09 05:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:05:28.46346489 +0000 UTC m=+82.376755777" watchObservedRunningTime="2025-09-09 05:05:28.464317259 +0000 UTC m=+82.377608106" Sep 9 05:05:30.197140 kubelet[2678]: E0909 05:05:30.197079 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:30.435774 containerd[1534]: time="2025-09-09T05:05:30.435727273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" id:\"5784f1ad139e7c6e5d5f259ddaa2f8fcc9766fd1f9a50a620cf15b5c649a634f\" pid:5160 exit_status:1 exited_at:{seconds:1757394330 nanos:435410562}" Sep 9 05:05:30.619949 systemd-networkd[1453]: lxc_health: Link UP Sep 9 05:05:30.621170 systemd-networkd[1453]: lxc_health: Gained carrier Sep 9 05:05:32.198167 kubelet[2678]: E0909 05:05:32.197864 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:32.459888 kubelet[2678]: E0909 05:05:32.459789 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:05:32.567334 containerd[1534]: time="2025-09-09T05:05:32.567268547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" id:\"16b93f90e21bc711d7f817be32a792a8ee68f63d35c7fdb12c01a6e2da1c51e4\" pid:5276 exited_at:{seconds:1757394332 nanos:566896596}" Sep 9 05:05:32.663744 systemd-networkd[1453]: lxc_health: Gained IPv6LL Sep 9 05:05:34.664536 containerd[1534]: time="2025-09-09T05:05:34.664392864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" id:\"a8f17ae275183fecba47cf83a499d0c3aaa19ac1f20ed6d7e9c841c413ce34d0\" pid:5309 exited_at:{seconds:1757394334 nanos:663852553}" Sep 9 05:05:36.766613 containerd[1534]: time="2025-09-09T05:05:36.766573057Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"fa5795057cf482245898273605f67593322679b636818e9da4ea4068ff86a796\" id:\"82a5e632755c03874d81977214e4b0ff561be6d8e86d8e3cba363742dd50576b\" pid:5333 exited_at:{seconds:1757394336 nanos:766261421}" Sep 9 05:05:36.771403 sshd[4472]: Connection closed by 10.0.0.1 port 52722 Sep 9 05:05:36.771863 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Sep 9 05:05:36.775235 systemd[1]: sshd@25-10.0.0.93:22-10.0.0.1:52722.service: Deactivated successfully. Sep 9 05:05:36.778797 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 05:05:36.779595 systemd-logind[1521]: Session 26 logged out. Waiting for processes to exit. Sep 9 05:05:36.781796 systemd-logind[1521]: Removed session 26.