Sep 16 04:17:53.772516 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 16 04:17:53.772536 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025 Sep 16 04:17:53.772545 kernel: KASLR enabled Sep 16 04:17:53.772550 kernel: efi: EFI v2.7 by EDK II Sep 16 04:17:53.772561 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 Sep 16 04:17:53.772566 kernel: random: crng init done Sep 16 04:17:53.772578 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 16 04:17:53.772584 kernel: secureboot: Secure boot enabled Sep 16 04:17:53.772589 kernel: ACPI: Early table checksum verification disabled Sep 16 04:17:53.772599 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 16 04:17:53.772607 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 16 04:17:53.772614 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772622 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772628 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772635 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772643 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772649 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772655 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772661 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772667 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:17:53.772673 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 16 04:17:53.772679 kernel: ACPI: Use ACPI SPCR as default console: No Sep 16 04:17:53.772685 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 16 04:17:53.772691 kernel: NODE_DATA(0) allocated [mem 0xdc736a00-0xdc73dfff] Sep 16 04:17:53.772697 kernel: Zone ranges: Sep 16 04:17:53.772705 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 16 04:17:53.772711 kernel: DMA32 empty Sep 16 04:17:53.772717 kernel: Normal empty Sep 16 04:17:53.772723 kernel: Device empty Sep 16 04:17:53.772729 kernel: Movable zone start for each node Sep 16 04:17:53.772735 kernel: Early memory node ranges Sep 16 04:17:53.772741 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 16 04:17:53.772748 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 16 04:17:53.772754 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 16 04:17:53.772760 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 16 04:17:53.772766 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 16 04:17:53.772772 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 16 04:17:53.772779 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 16 04:17:53.772786 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 16 04:17:53.772792 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 16 04:17:53.772801 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Sep 16 04:17:53.772807 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 16 04:17:53.772814 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 16 04:17:53.772820 kernel: psci: probing for conduit method from ACPI. Sep 16 04:17:53.772828 kernel: psci: PSCIv1.1 detected in firmware. Sep 16 04:17:53.772834 kernel: psci: Using standard PSCI v0.2 function IDs Sep 16 04:17:53.772841 kernel: psci: Trusted OS migration not required Sep 16 04:17:53.772847 kernel: psci: SMC Calling Convention v1.1 Sep 16 04:17:53.772854 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 16 04:17:53.772861 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 16 04:17:53.772868 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 16 04:17:53.772875 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 16 04:17:53.772882 kernel: Detected PIPT I-cache on CPU0 Sep 16 04:17:53.772890 kernel: CPU features: detected: GIC system register CPU interface Sep 16 04:17:53.772897 kernel: CPU features: detected: Spectre-v4 Sep 16 04:17:53.772904 kernel: CPU features: detected: Spectre-BHB Sep 16 04:17:53.772910 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 16 04:17:53.772917 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 16 04:17:53.772924 kernel: CPU features: detected: ARM erratum 1418040 Sep 16 04:17:53.772930 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 16 04:17:53.772937 kernel: alternatives: applying boot alternatives Sep 16 04:17:53.772944 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:17:53.772951 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:17:53.772958 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:17:53.772965 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:17:53.772972 kernel: Fallback order for Node 0: 0 Sep 16 04:17:53.772978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 16 04:17:53.772984 kernel: Policy zone: DMA Sep 16 04:17:53.772991 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:17:53.772997 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 16 04:17:53.773003 kernel: software IO TLB: area num 4. Sep 16 04:17:53.773010 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 16 04:17:53.773017 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 16 04:17:53.773023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 16 04:17:53.773030 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:17:53.773037 kernel: rcu: RCU event tracing is enabled. Sep 16 04:17:53.773044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 16 04:17:53.773051 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:17:53.773058 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:17:53.773065 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
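The kernel command line logged above (and echoed again later by dracut-cmdline) is a plain list of bare flags and key=value pairs: Flatcar's dm-verity /usr setup (verity.usr, verity.usrhash), the root filesystem selector (root=LABEL=ROOT) and console settings. A minimal sketch, assuming Python, of splitting exactly that string the way a boot-time helper might:

    # Minimal sketch: split the kernel command line from the log above into
    # bare flags and key=value parameters. The string is copied from the log.
    import shlex

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313"
    )

    params, flags = {}, []
    for token in shlex.split(cmdline):
        if "=" in token:
            key, _, value = token.partition("=")  # split only at the first '='
            params[key] = value
        else:
            flags.append(token)

    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # dm-verity root hash for the /usr partition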
Sep 16 04:17:53.773071 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 16 04:17:53.773078 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 04:17:53.773085 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 04:17:53.773091 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 16 04:17:53.773098 kernel: GICv3: 256 SPIs implemented Sep 16 04:17:53.773104 kernel: GICv3: 0 Extended SPIs implemented Sep 16 04:17:53.773111 kernel: Root IRQ handler: gic_handle_irq Sep 16 04:17:53.773118 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 16 04:17:53.773125 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 16 04:17:53.773131 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 16 04:17:53.773147 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 16 04:17:53.773153 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 16 04:17:53.773160 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 16 04:17:53.773172 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 16 04:17:53.773179 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 16 04:17:53.773185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:17:53.773192 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:17:53.773198 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 16 04:17:53.773205 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 16 04:17:53.773213 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 16 04:17:53.773219 kernel: arm-pv: using stolen time PV Sep 16 04:17:53.773226 kernel: Console: colour dummy device 80x25 Sep 16 04:17:53.773233 kernel: ACPI: Core revision 20240827 Sep 16 04:17:53.773239 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 16 04:17:53.773246 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:17:53.773253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:17:53.773260 kernel: landlock: Up and running. Sep 16 04:17:53.773266 kernel: SELinux: Initializing. Sep 16 04:17:53.773274 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:17:53.773281 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:17:53.773288 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:17:53.773295 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:17:53.773302 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:17:53.773308 kernel: Remapping and enabling EFI services. Sep 16 04:17:53.773315 kernel: smp: Bringing up secondary CPUs ... 
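The 40 ns sched_clock resolution and the "50.00 BogoMIPS (lpj=25000)" figure above both derive from the 25.00 MHz architected timer; a quick arithmetic check, assuming HZ=1000 (consistent with lpj=25000 but not printed in the log) and the kernel's usual BogoMIPS print formula:

    # Sanity-check the timer-derived numbers from the log above.
    timer_hz = 25_000_000            # "cp15 timer(s) running at 25.00MHz"

    print(1e9 / timer_hz)            # 40.0 ns -> matches "resolution 40ns"

    # "Calibrating delay loop (skipped)" means lpj comes from the timer:
    HZ = 1000                        # assumption, not shown in the log
    lpj = timer_hz // HZ
    print(lpj)                       # 25000 -> matches "lpj=25000"

    # BogoMIPS as printed by the kernel: lpj / (500000 / HZ)
    print(lpj / (500_000 / HZ))      # 50.0 -> matches "50.00 BogoMIPS"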
Sep 16 04:17:53.773322 kernel: Detected PIPT I-cache on CPU1 Sep 16 04:17:53.773328 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 16 04:17:53.773336 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 16 04:17:53.773347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:17:53.773354 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 16 04:17:53.773362 kernel: Detected PIPT I-cache on CPU2 Sep 16 04:17:53.773369 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 16 04:17:53.773377 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 16 04:17:53.773384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:17:53.773390 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 16 04:17:53.773398 kernel: Detected PIPT I-cache on CPU3 Sep 16 04:17:53.773406 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 16 04:17:53.773413 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 16 04:17:53.773420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:17:53.773427 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 16 04:17:53.773434 kernel: smp: Brought up 1 node, 4 CPUs Sep 16 04:17:53.773441 kernel: SMP: Total of 4 processors activated. Sep 16 04:17:53.773448 kernel: CPU: All CPU(s) started at EL1 Sep 16 04:17:53.773455 kernel: CPU features: detected: 32-bit EL0 Support Sep 16 04:17:53.773462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 16 04:17:53.773472 kernel: CPU features: detected: Common not Private translations Sep 16 04:17:53.773479 kernel: CPU features: detected: CRC32 instructions Sep 16 04:17:53.773486 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 16 04:17:53.773493 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 16 04:17:53.773500 kernel: CPU features: detected: LSE atomic instructions Sep 16 04:17:53.773507 kernel: CPU features: detected: Privileged Access Never Sep 16 04:17:53.773514 kernel: CPU features: detected: RAS Extension Support Sep 16 04:17:53.773521 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 16 04:17:53.773528 kernel: alternatives: applying system-wide alternatives Sep 16 04:17:53.773536 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 16 04:17:53.773544 kernel: Memory: 2422368K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 127584K reserved, 16384K cma-reserved) Sep 16 04:17:53.773551 kernel: devtmpfs: initialized Sep 16 04:17:53.773558 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:17:53.773565 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 16 04:17:53.773572 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 16 04:17:53.773578 kernel: 0 pages in range for non-PLT usage Sep 16 04:17:53.773585 kernel: 508560 pages in range for PLT usage Sep 16 04:17:53.773592 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:17:53.773600 kernel: SMBIOS 3.0.0 present. 
Sep 16 04:17:53.773607 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 16 04:17:53.773614 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:17:53.773621 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:17:53.773628 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 16 04:17:53.773635 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 16 04:17:53.773642 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 16 04:17:53.773649 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:17:53.773656 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Sep 16 04:17:53.773664 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:17:53.773671 kernel: cpuidle: using governor menu Sep 16 04:17:53.773678 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 16 04:17:53.773685 kernel: ASID allocator initialised with 32768 entries Sep 16 04:17:53.773691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:17:53.773698 kernel: Serial: AMBA PL011 UART driver Sep 16 04:17:53.773705 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:17:53.773712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:17:53.773719 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 16 04:17:53.773727 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 16 04:17:53.773734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:17:53.773741 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:17:53.773747 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 16 04:17:53.773760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 16 04:17:53.773767 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:17:53.773774 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:17:53.773781 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:17:53.773788 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:17:53.773797 kernel: ACPI: Interpreter enabled Sep 16 04:17:53.773804 kernel: ACPI: Using GIC for interrupt routing Sep 16 04:17:53.773811 kernel: ACPI: MCFG table detected, 1 entries Sep 16 04:17:53.773818 kernel: ACPI: CPU0 has been hot-added Sep 16 04:17:53.773825 kernel: ACPI: CPU1 has been hot-added Sep 16 04:17:53.773832 kernel: ACPI: CPU2 has been hot-added Sep 16 04:17:53.773839 kernel: ACPI: CPU3 has been hot-added Sep 16 04:17:53.773846 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 16 04:17:53.773854 kernel: printk: legacy console [ttyAMA0] enabled Sep 16 04:17:53.773863 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:17:53.773997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:17:53.774065 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 16 04:17:53.774125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 16 04:17:53.774206 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 16 04:17:53.774266 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 16 04:17:53.774275 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 16 04:17:53.774286 
kernel: PCI host bridge to bus 0000:00 Sep 16 04:17:53.774357 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 16 04:17:53.774411 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 16 04:17:53.774464 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 16 04:17:53.774516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:17:53.774591 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:17:53.774661 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 16 04:17:53.774724 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 16 04:17:53.774785 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 16 04:17:53.774843 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 16 04:17:53.774902 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 16 04:17:53.774961 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 16 04:17:53.775021 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 16 04:17:53.775078 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 16 04:17:53.775131 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 16 04:17:53.775209 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 16 04:17:53.775219 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 16 04:17:53.775227 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 16 04:17:53.775234 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 16 04:17:53.775241 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 16 04:17:53.775248 kernel: iommu: Default domain type: Translated Sep 16 04:17:53.775257 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 16 04:17:53.775265 kernel: efivars: Registered efivars operations Sep 16 04:17:53.775271 kernel: vgaarb: loaded Sep 16 04:17:53.775279 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 16 04:17:53.775286 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:17:53.775293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:17:53.775300 kernel: pnp: PnP ACPI init Sep 16 04:17:53.775371 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 16 04:17:53.775381 kernel: pnp: PnP ACPI: found 1 devices Sep 16 04:17:53.775390 kernel: NET: Registered PF_INET protocol family Sep 16 04:17:53.775397 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:17:53.775404 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 04:17:53.775411 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:17:53.775418 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:17:53.775425 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 04:17:53.775432 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 04:17:53.775439 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:17:53.775446 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:17:53.775455 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:17:53.775462 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:17:53.775469 
kernel: kvm [1]: HYP mode not available Sep 16 04:17:53.775475 kernel: Initialise system trusted keyrings Sep 16 04:17:53.775482 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 04:17:53.775489 kernel: Key type asymmetric registered Sep 16 04:17:53.775496 kernel: Asymmetric key parser 'x509' registered Sep 16 04:17:53.775503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 16 04:17:53.775510 kernel: io scheduler mq-deadline registered Sep 16 04:17:53.775519 kernel: io scheduler kyber registered Sep 16 04:17:53.775526 kernel: io scheduler bfq registered Sep 16 04:17:53.775533 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 16 04:17:53.775539 kernel: ACPI: button: Power Button [PWRB] Sep 16 04:17:53.775547 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 16 04:17:53.775608 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 16 04:17:53.775617 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:17:53.775624 kernel: thunder_xcv, ver 1.0 Sep 16 04:17:53.775631 kernel: thunder_bgx, ver 1.0 Sep 16 04:17:53.775640 kernel: nicpf, ver 1.0 Sep 16 04:17:53.775661 kernel: nicvf, ver 1.0 Sep 16 04:17:53.775730 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 16 04:17:53.775789 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:17:53 UTC (1757996273) Sep 16 04:17:53.775798 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:17:53.775806 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 16 04:17:53.775813 kernel: watchdog: NMI not fully supported Sep 16 04:17:53.775819 kernel: watchdog: Hard watchdog permanently disabled Sep 16 04:17:53.775828 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:17:53.775835 kernel: Segment Routing with IPv6 Sep 16 04:17:53.775842 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:17:53.775849 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:17:53.775856 kernel: Key type dns_resolver registered Sep 16 04:17:53.775863 kernel: registered taskstats version 1 Sep 16 04:17:53.775870 kernel: Loading compiled-in X.509 certificates Sep 16 04:17:53.775877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805' Sep 16 04:17:53.775884 kernel: Demotion targets for Node 0: null Sep 16 04:17:53.775892 kernel: Key type .fscrypt registered Sep 16 04:17:53.775899 kernel: Key type fscrypt-provisioning registered Sep 16 04:17:53.775906 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:17:53.775913 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:17:53.775920 kernel: ima: No architecture policies found Sep 16 04:17:53.775927 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 16 04:17:53.775934 kernel: clk: Disabling unused clocks Sep 16 04:17:53.775941 kernel: PM: genpd: Disabling unused power domains Sep 16 04:17:53.775948 kernel: Warning: unable to open an initial console. Sep 16 04:17:53.775956 kernel: Freeing unused kernel memory: 38976K Sep 16 04:17:53.775963 kernel: Run /init as init process Sep 16 04:17:53.775970 kernel: with arguments: Sep 16 04:17:53.775977 kernel: /init Sep 16 04:17:53.775983 kernel: with environment: Sep 16 04:17:53.775990 kernel: HOME=/ Sep 16 04:17:53.775997 kernel: TERM=linux Sep 16 04:17:53.776004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:17:53.776012 systemd[1]: Successfully made /usr/ read-only. 
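The rtc-efi line above reports the boot-time clock both as a human-readable UTC timestamp and as a raw epoch value (1757996273); converting one to the other confirms they agree:

    # Confirm that epoch 1757996273 from the rtc-efi line above matches the
    # printed "2025-09-16T04:17:53 UTC".
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1757996273, tz=timezone.utc).isoformat())
    # 2025-09-16T04:17:53+00:00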
Sep 16 04:17:53.776024 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:17:53.776032 systemd[1]: Detected virtualization kvm. Sep 16 04:17:53.776040 systemd[1]: Detected architecture arm64. Sep 16 04:17:53.776048 systemd[1]: Running in initrd. Sep 16 04:17:53.776055 systemd[1]: No hostname configured, using default hostname. Sep 16 04:17:53.776063 systemd[1]: Hostname set to . Sep 16 04:17:53.776070 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:17:53.776079 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:17:53.776086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:17:53.776094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:17:53.776102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:17:53.776109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:17:53.776117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:17:53.776125 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:17:53.776199 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:17:53.776212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:17:53.776220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:17:53.776227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:17:53.776235 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:17:53.776242 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:17:53.776250 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:17:53.776257 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:17:53.776267 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:17:53.776275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:17:53.776283 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:17:53.776290 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:17:53.776298 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:17:53.776306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:17:53.776313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:17:53.776321 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:17:53.776329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:17:53.776338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:17:53.776345 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 16 04:17:53.776354 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:17:53.776361 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:17:53.776369 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:17:53.776376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:17:53.776384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:17:53.776391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:17:53.776401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:17:53.776409 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:17:53.776434 systemd-journald[245]: Collecting audit messages is disabled. Sep 16 04:17:53.776454 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:17:53.776463 systemd-journald[245]: Journal started Sep 16 04:17:53.776481 systemd-journald[245]: Runtime Journal (/run/log/journal/d48977590fd14dce845a94e4910b8da5) is 6M, max 48.5M, 42.4M free. Sep 16 04:17:53.781196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:17:53.770084 systemd-modules-load[246]: Inserted module 'overlay' Sep 16 04:17:53.783984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:17:53.785300 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 16 04:17:53.786737 kernel: Bridge firewalling registered Sep 16 04:17:53.786756 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:17:53.787780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:17:53.790189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:17:53.794258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:17:53.795888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:17:53.797551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:17:53.809989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:17:53.817085 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:17:53.818601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:17:53.819613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:17:53.820704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:17:53.824863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:17:53.829379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:17:53.844682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 16 04:17:53.858607 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:17:53.871796 systemd-resolved[285]: Positive Trust Anchors: Sep 16 04:17:53.871816 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:17:53.871847 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:17:53.876620 systemd-resolved[285]: Defaulting to hostname 'linux'. Sep 16 04:17:53.877538 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:17:53.879821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:17:53.927163 kernel: SCSI subsystem initialized Sep 16 04:17:53.931159 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:17:53.938154 kernel: iscsi: registered transport (tcp) Sep 16 04:17:53.951162 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:17:53.951188 kernel: QLogic iSCSI HBA Driver Sep 16 04:17:53.967424 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:17:53.983206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:17:53.986195 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:17:54.029801 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:17:54.031933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:17:54.094170 kernel: raid6: neonx8 gen() 15773 MB/s Sep 16 04:17:54.111152 kernel: raid6: neonx4 gen() 15780 MB/s Sep 16 04:17:54.128151 kernel: raid6: neonx2 gen() 13240 MB/s Sep 16 04:17:54.145158 kernel: raid6: neonx1 gen() 10432 MB/s Sep 16 04:17:54.162152 kernel: raid6: int64x8 gen() 6906 MB/s Sep 16 04:17:54.179161 kernel: raid6: int64x4 gen() 7349 MB/s Sep 16 04:17:54.196151 kernel: raid6: int64x2 gen() 6098 MB/s Sep 16 04:17:54.213159 kernel: raid6: int64x1 gen() 5046 MB/s Sep 16 04:17:54.213180 kernel: raid6: using algorithm neonx4 gen() 15780 MB/s Sep 16 04:17:54.230168 kernel: raid6: .... xor() 12351 MB/s, rmw enabled Sep 16 04:17:54.230181 kernel: raid6: using neon recovery algorithm Sep 16 04:17:54.235156 kernel: xor: measuring software checksum speed Sep 16 04:17:54.235182 kernel: 8regs : 21630 MB/sec Sep 16 04:17:54.236206 kernel: 32regs : 18718 MB/sec Sep 16 04:17:54.236224 kernel: arm64_neon : 28080 MB/sec Sep 16 04:17:54.236233 kernel: xor: using function: arm64_neon (28080 MB/sec) Sep 16 04:17:54.289207 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:17:54.295013 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
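The raid6 and xor lines above show the kernel benchmarking each candidate implementation and keeping the fastest one ("using algorithm neonx4", "using function: arm64_neon"). The selection itself amounts to an argmax over the measured throughputs; an illustrative sketch using the MB/s values copied from the log:

    # Mirror the choice logged above: keep the fastest measured raid6 gen()
    # implementation. Throughputs are the MB/s figures from the log.
    gen_mbps = {
        "neonx8": 15773, "neonx4": 15780, "neonx2": 13240, "neonx1": 10432,
        "int64x8": 6906, "int64x4": 7349, "int64x2": 6098, "int64x1": 5046,
    }
    best = max(gen_mbps, key=gen_mbps.get)
    print(best, gen_mbps[best])   # neonx4 15780 -> "using algorithm neonx4 gen()"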
Sep 16 04:17:54.298304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:17:54.324338 systemd-udevd[500]: Using default interface naming scheme 'v255'. Sep 16 04:17:54.328622 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:17:54.330326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:17:54.353321 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 16 04:17:54.375417 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:17:54.377566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:17:54.431160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:17:54.433302 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:17:54.478172 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 16 04:17:54.483751 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 16 04:17:54.485893 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:17:54.485915 kernel: GPT:9289727 != 19775487 Sep 16 04:17:54.485926 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:17:54.485935 kernel: GPT:9289727 != 19775487 Sep 16 04:17:54.487207 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:17:54.488153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:17:54.491012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:17:54.491129 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:17:54.498143 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:17:54.499676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:17:54.524190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 16 04:17:54.532295 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 16 04:17:54.533464 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:17:54.535170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:17:54.555851 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:17:54.562672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 16 04:17:54.563643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 16 04:17:54.566069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:17:54.567866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:17:54.569511 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:17:54.571694 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:17:54.573279 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:17:54.593563 disk-uuid[591]: Primary Header is updated. Sep 16 04:17:54.593563 disk-uuid[591]: Secondary Entries is updated. Sep 16 04:17:54.593563 disk-uuid[591]: Secondary Header is updated. 
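The GPT warnings above are the usual symptom of a disk image that was grown after partitioning: the backup GPT header still sits at LBA 9289727 while the last LBA of the enlarged virtio disk is 19775487 (disk-uuid then rewrites the secondary header/entries, as the lines that follow show). The two size figures in the virtio_blk line also check out:

    # Numbers from the virtio_blk / GPT lines above.
    blocks, block_size = 19_775_488, 512

    print(blocks - 1)                      # 19775487, vs. backup header at 9289727
    size_bytes = blocks * block_size
    print(round(size_bytes / 1e9, 1))      # 10.1  (GB, decimal)
    print(round(size_bytes / 2**30, 2))    # 9.43  (GiB, binary)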
Sep 16 04:17:54.597198 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:17:54.598457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:17:55.604199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:17:55.605314 disk-uuid[595]: The operation has completed successfully. Sep 16 04:17:55.631386 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:17:55.631487 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:17:55.664235 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:17:55.680329 sh[611]: Success Sep 16 04:17:55.693978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:17:55.694037 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:17:55.694048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:17:55.701183 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:17:55.725428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:17:55.727914 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:17:55.741500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:17:55.751012 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (623) Sep 16 04:17:55.751062 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:17:55.751074 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:17:55.757712 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:17:55.757773 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:17:55.758950 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:17:55.760180 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:17:55.761277 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:17:55.762367 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:17:55.770857 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:17:55.799701 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Sep 16 04:17:55.799749 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:17:55.800967 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:17:55.803388 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:17:55.803418 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:17:55.807182 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:17:55.810174 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:17:55.811911 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:17:55.870776 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:17:55.874311 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 16 04:17:55.915380 systemd-networkd[800]: lo: Link UP Sep 16 04:17:55.915391 systemd-networkd[800]: lo: Gained carrier Sep 16 04:17:55.916063 systemd-networkd[800]: Enumeration completed Sep 16 04:17:55.916170 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:17:55.916758 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:17:55.916762 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:17:55.917913 systemd-networkd[800]: eth0: Link UP Sep 16 04:17:55.917974 systemd[1]: Reached target network.target - Network. Sep 16 04:17:55.918290 systemd-networkd[800]: eth0: Gained carrier Sep 16 04:17:55.918300 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:17:55.925555 ignition[704]: Ignition 2.22.0 Sep 16 04:17:55.925563 ignition[704]: Stage: fetch-offline Sep 16 04:17:55.925590 ignition[704]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:17:55.925597 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:17:55.925674 ignition[704]: parsed url from cmdline: "" Sep 16 04:17:55.925677 ignition[704]: no config URL provided Sep 16 04:17:55.925682 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:17:55.925688 ignition[704]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:17:55.925705 ignition[704]: op(1): [started] loading QEMU firmware config module Sep 16 04:17:55.925709 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 04:17:55.930599 ignition[704]: op(1): [finished] loading QEMU firmware config module Sep 16 04:17:55.943203 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:17:55.971243 ignition[704]: parsing config with SHA512: f9bd1d9f12b37ec2c68c82296fa39c7cb568cbf01bebea47db63c649344ecf5115b4580a0f1e0d6472d3d6fa1e24c794a54eb8b87bc51eff0c4841098418cf6b Sep 16 04:17:55.976768 unknown[704]: fetched base config from "system" Sep 16 04:17:55.976782 unknown[704]: fetched user config from "qemu" Sep 16 04:17:55.977267 ignition[704]: fetch-offline: fetch-offline passed Sep 16 04:17:55.979293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:17:55.977325 ignition[704]: Ignition finished successfully Sep 16 04:17:55.980299 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 04:17:55.981005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:17:56.015270 ignition[814]: Ignition 2.22.0 Sep 16 04:17:56.015285 ignition[814]: Stage: kargs Sep 16 04:17:56.015403 ignition[814]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:17:56.015411 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:17:56.016112 ignition[814]: kargs: kargs passed Sep 16 04:17:56.018526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:17:56.016179 ignition[814]: Ignition finished successfully Sep 16 04:17:56.021189 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
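The DHCPv4 line above packs the whole link configuration into CIDR form ("10.0.0.23/16, gateway 10.0.0.1"); expanding it with Python's ipaddress module shows what the /16 implies for the netmask and on-link routing:

    # Expand the "10.0.0.23/16, gateway 10.0.0.1" lease from the log above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.23/16")
    print(iface.network)                 # 10.0.0.0/16
    print(iface.netmask)                 # 255.255.0.0
    print(iface.network.num_addresses)   # 65536 addresses in the subnet

    gateway = ipaddress.ip_address("10.0.0.1")
    print(gateway in iface.network)      # True: the gateway is reachable on-link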
Sep 16 04:17:56.046467 ignition[822]: Ignition 2.22.0 Sep 16 04:17:56.046483 ignition[822]: Stage: disks Sep 16 04:17:56.046604 ignition[822]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:17:56.046675 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:17:56.047467 ignition[822]: disks: disks passed Sep 16 04:17:56.049028 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:17:56.047509 ignition[822]: Ignition finished successfully Sep 16 04:17:56.051334 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:17:56.052844 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:17:56.054218 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:17:56.055743 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:17:56.057198 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:17:56.059255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:17:56.086091 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 04:17:56.090161 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:17:56.091917 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:17:56.147148 kernel: EXT4-fs (vda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:17:56.147813 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:17:56.148873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:17:56.151412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:17:56.153249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:17:56.154065 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 04:17:56.154116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:17:56.154176 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:17:56.163554 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:17:56.165256 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:17:56.168158 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Sep 16 04:17:56.169143 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:17:56.169163 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:17:56.171552 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:17:56.171581 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:17:56.172569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:17:56.200469 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:17:56.204128 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:17:56.207743 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:17:56.211424 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:17:56.274859 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
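The fsck summary above is in "used/total" form; expressed as percentages it shows how empty the freshly created ROOT filesystem still is at this point:

    # Percentages for "clean, 15/553520 files, 52789/553472 blocks" above.
    files_used, files_total = 15, 553_520
    blocks_used, blocks_total = 52_789, 553_472

    print(f"inodes: {100 * files_used / files_total:.3f}% used")   # ~0.003%
    print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used") # ~9.5%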
Sep 16 04:17:56.277066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:17:56.279925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:17:56.303223 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:17:56.306449 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:17:56.332781 ignition[955]: INFO : Ignition 2.22.0 Sep 16 04:17:56.332781 ignition[955]: INFO : Stage: mount Sep 16 04:17:56.334133 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:17:56.334133 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:17:56.334133 ignition[955]: INFO : mount: mount passed Sep 16 04:17:56.334133 ignition[955]: INFO : Ignition finished successfully Sep 16 04:17:56.335729 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:17:56.337805 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:17:56.894225 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:17:56.895771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:17:56.927410 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Sep 16 04:17:56.927444 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:17:56.927454 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:17:56.931156 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:17:56.931177 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:17:56.932428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:17:56.963720 ignition[983]: INFO : Ignition 2.22.0 Sep 16 04:17:56.963720 ignition[983]: INFO : Stage: files Sep 16 04:17:56.965103 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:17:56.965103 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:17:56.965103 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:17:56.967829 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:17:56.967829 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:17:56.967829 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:17:56.967829 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:17:56.967829 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:17:56.967119 unknown[983]: wrote ssh authorized keys file for user: core Sep 16 04:17:56.973958 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 16 04:17:56.973958 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 16 04:17:57.332280 systemd-networkd[800]: eth0: Gained IPv6LL Sep 16 04:17:57.894667 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:18:00.618594 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 16 04:18:00.618594 ignition[983]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:18:00.622808 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 16 04:18:00.958171 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 04:18:01.422753 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:18:01.424666 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:18:01.424666 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:18:01.424666 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:18:01.424666 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:18:01.424666 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:18:01.435672 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:18:01.435672 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:18:01.435672 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:18:01.442438 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:18:01.444706 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:18:01.444706 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 16 04:18:01.449824 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 16 04:18:01.449824 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 16 04:18:01.454856 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 16 04:18:01.896552 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 04:18:02.730474 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 16 04:18:02.730474 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 04:18:02.734642 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:18:02.738119 ignition[983]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:18:02.738119 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 04:18:02.741872 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 16 04:18:02.741872 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 04:18:02.741872 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 04:18:02.741872 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 16 04:18:02.741872 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 16 04:18:02.755381 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:18:02.760347 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:18:02.761935 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 16 04:18:02.761935 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:18:02.761935 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:18:02.761935 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:18:02.761935 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:18:02.761935 ignition[983]: INFO : files: files passed Sep 16 04:18:02.761935 ignition[983]: INFO : Ignition finished successfully Sep 16 04:18:02.762306 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:18:02.772308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:18:02.788844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:18:02.792433 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:18:02.794369 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:18:02.796703 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Sep 16 04:18:02.799217 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:18:02.799217 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:18:02.801745 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:18:02.803566 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:18:02.804756 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:18:02.807078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:18:02.865932 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:18:02.866035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 16 04:18:02.867913 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:18:02.869352 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:18:02.870868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:18:02.871609 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:18:02.906260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:18:02.908377 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:18:02.927414 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:18:02.929371 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:18:02.930461 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:18:02.932053 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:18:02.932217 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:18:02.934960 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:18:02.937037 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:18:02.938667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:18:02.940216 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:18:02.942179 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:18:02.944146 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:18:02.945968 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:18:02.947704 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:18:02.949584 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:18:02.951475 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:18:02.953129 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:18:02.954584 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:18:02.954709 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:18:02.957071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:18:02.959014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:18:02.960889 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:18:02.960987 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:18:02.963010 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:18:02.963143 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:18:02.966025 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:18:02.966171 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:18:02.968087 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:18:02.969613 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:18:02.969708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:18:02.971441 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:18:02.973093 systemd[1]: Stopped target sockets.target - Socket Units. 
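The run of "Stopped target …" / "Stopped …" messages above is the initrd dismantling itself before the pivot; dracut-pre-pivot.service runs its cleanup hook last, once /sysroot is fully assembled. A small sketch, using standard journal tooling, for pulling this phase back out after the system is up:

    # Sketch: review the initrd teardown from the booted system.
    journalctl -b -o short-precise _SYSTEMD_UNIT=dracut-pre-pivot.service
    journalctl -b -o short-precise | grep -E 'Stopped target|Switching root'
    systemd-analyze        # time spent in firmware, loader, kernel, initrd and userspace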
Sep 16 04:18:02.974660 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:18:02.974741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:18:02.976327 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:18:02.976404 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:18:02.978411 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:18:02.978528 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:18:02.980210 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:18:02.980316 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:18:02.982726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:18:02.984615 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:18:02.985775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:18:02.985887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:18:02.987593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:18:02.987692 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:18:02.993498 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:18:03.005168 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:18:03.012429 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:18:03.016890 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:18:03.017023 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:18:03.020240 ignition[1038]: INFO : Ignition 2.22.0 Sep 16 04:18:03.020240 ignition[1038]: INFO : Stage: umount Sep 16 04:18:03.020240 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:18:03.020240 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:18:03.020240 ignition[1038]: INFO : umount: umount passed Sep 16 04:18:03.020240 ignition[1038]: INFO : Ignition finished successfully Sep 16 04:18:03.024085 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:18:03.026165 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:18:03.027859 systemd[1]: Stopped target network.target - Network. Sep 16 04:18:03.029286 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:18:03.029338 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:18:03.030812 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:18:03.030849 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:18:03.032464 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:18:03.032504 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:18:03.033981 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:18:03.034015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:18:03.035673 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:18:03.035711 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:18:03.037376 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:18:03.038940 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
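The umount stage above is Ignition's final pass (note the separate ignition[1038] process), and op(13) earlier already recorded the overall outcome in /sysroot/etc/.ignition-result.json, which shows up as /etc/.ignition-result.json after the pivot. A sketch for confirming that from the running machine (jq only if it is present on the image):

    # Sketch: confirm how Ignition finished on first boot.
    sudo cat /etc/.ignition-result.json
    sudo cat /etc/.ignition-result.json | jq .   # pretty-print, if jq is available
    journalctl -b -t ignition                    # all Ignition messages from this boot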
Sep 16 04:18:03.044468 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:18:03.044568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:18:03.048287 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:18:03.048511 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:18:03.048546 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:18:03.052063 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:18:03.053010 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:18:03.053171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:18:03.056612 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:18:03.056794 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:18:03.057675 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:18:03.057715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:18:03.060194 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:18:03.061470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:18:03.061522 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:18:03.063446 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:18:03.063485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:18:03.065954 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:18:03.065994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:18:03.067723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:18:03.070038 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:18:03.081294 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:18:03.081424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:18:03.083229 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:18:03.083358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:18:03.087192 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:18:03.087251 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:18:03.088989 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:18:03.089017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:18:03.090762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:18:03.090807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:18:03.093081 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:18:03.093156 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:18:03.095437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:18:03.095487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:18:03.098720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 16 04:18:03.099882 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:18:03.099950 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:18:03.102995 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:18:03.103043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:18:03.106271 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 04:18:03.106312 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:18:03.109373 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:18:03.109411 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:18:03.111406 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:18:03.111443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:18:03.116261 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:18:03.116343 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:18:03.117510 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:18:03.119685 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:18:03.138237 systemd[1]: Switching root. Sep 16 04:18:03.168527 systemd-journald[245]: Journal stopped Sep 16 04:18:03.945349 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Sep 16 04:18:03.945404 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:18:03.945416 kernel: SELinux: policy capability open_perms=1 Sep 16 04:18:03.945425 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:18:03.945434 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:18:03.945446 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:18:03.945458 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:18:03.945467 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:18:03.945493 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:18:03.945503 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:18:03.945514 kernel: audit: type=1403 audit(1757996283.401:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:18:03.945524 systemd[1]: Successfully loaded SELinux policy in 57.737ms. Sep 16 04:18:03.945544 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.262ms. Sep 16 04:18:03.945556 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:18:03.945566 systemd[1]: Detected virtualization kvm. Sep 16 04:18:03.945576 systemd[1]: Detected architecture arm64. Sep 16 04:18:03.945586 systemd[1]: Detected first boot. Sep 16 04:18:03.945597 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:18:03.945607 zram_generator::config[1084]: No configuration found. Sep 16 04:18:03.945624 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:18:03.945633 systemd[1]: Populated /etc with preset unit settings. 
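After "Switching root." PID 1 re-executes in the real root: it loads the SELinux policy, prints its build-time feature flags, detects KVM on arm64, and, because this is the first boot, seeds the machine ID from the VM UUID. A sketch of how the same facts can be checked interactively (getenforce only if the SELinux userspace tools are installed):

    # Sketch: confirm what systemd reported right after switch-root.
    systemd-detect-virt      # expected to print "kvm" here
    uname -m                 # aarch64
    cat /etc/machine-id      # ID derived from the VM UUID on first boot
    systemctl --version      # same +SELINUX/+SECCOMP/... feature string as in the log
    getenforce               # current SELinux mode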
Sep 16 04:18:03.945644 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:18:03.945653 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:18:03.945663 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:18:03.945673 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:18:03.945683 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:18:03.945693 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:18:03.945706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:18:03.945716 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:18:03.945726 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:18:03.945736 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:18:03.945746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:18:03.945756 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:18:03.945766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:18:03.945776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:18:03.945786 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:18:03.945798 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:18:03.945808 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:18:03.945818 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:18:03.945828 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 16 04:18:03.945839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:18:03.945849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:18:03.945859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:18:03.945868 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:18:03.945879 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:18:03.945890 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:18:03.945900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:18:03.945909 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:18:03.945920 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:18:03.945931 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:18:03.945941 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:18:03.945951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:18:03.945961 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:18:03.945972 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:18:03.945982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
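The slice and automount units above are PID 1 laying out the cgroup hierarchy (getty, modprobe, user sessions, and so on) and the on-demand mounts for /boot and binfmt_misc before ordinary services start. A short sketch for listing the same objects later:

    # Sketch: list the slices, automounts and path units created above.
    systemctl list-units --type=slice
    systemctl list-units --type=automount,path
    systemd-cgls --no-pager | head -n 20   # the cgroup tree those slices define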
Sep 16 04:18:03.945992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:18:03.946001 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:18:03.946011 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:18:03.946021 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:18:03.946031 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:18:03.946041 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:18:03.946051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:18:03.946062 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:18:03.946072 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:18:03.946082 systemd[1]: Reached target machines.target - Containers. Sep 16 04:18:03.946092 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:18:03.946102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:18:03.946119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:18:03.946129 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:18:03.946152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:18:03.946166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:18:03.946176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:18:03.946185 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:18:03.946196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:18:03.946206 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:18:03.946216 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:18:03.946228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:18:03.946239 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:18:03.946248 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:18:03.946260 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:18:03.946271 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:18:03.946280 kernel: loop: module loaded Sep 16 04:18:03.946289 kernel: fuse: init (API version 7.41) Sep 16 04:18:03.946299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:18:03.946309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:18:03.946319 kernel: ACPI: bus type drm_connector registered Sep 16 04:18:03.946328 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:18:03.946338 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
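The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of systemd's modprobe@.service template, which loads kernel modules early; the kernel's "loop: module loaded" and "fuse: init (API version 7.41)" lines are the direct result. A sketch of the same mechanism driven by hand:

    # Sketch: the template unit behind the modprobe@*.service messages.
    systemctl cat modprobe@.service           # ExecStart runs modprobe on the %i instance name
    sudo systemctl start modprobe@fuse.service
    lsmod | grep -E '^(fuse|loop|dm_mod)'     # confirm the modules are present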
Sep 16 04:18:03.946349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:18:03.946360 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:18:03.946370 systemd[1]: Stopped verity-setup.service. Sep 16 04:18:03.946379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:18:03.946390 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:18:03.946400 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:18:03.946410 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:18:03.946420 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:18:03.946430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:18:03.946441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:18:03.946453 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:18:03.946463 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:18:03.946473 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:18:03.946483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:18:03.946492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:18:03.946525 systemd-journald[1152]: Collecting audit messages is disabled. Sep 16 04:18:03.946545 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:18:03.946557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:18:03.946568 systemd-journald[1152]: Journal started Sep 16 04:18:03.946587 systemd-journald[1152]: Runtime Journal (/run/log/journal/d48977590fd14dce845a94e4910b8da5) is 6M, max 48.5M, 42.4M free. Sep 16 04:18:03.749558 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:18:03.771069 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 16 04:18:03.771443 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:18:03.948332 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:18:03.949100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:18:03.949317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:18:03.950376 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:18:03.950520 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:18:03.951535 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:18:03.951681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:18:03.952818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:18:03.953925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:18:03.955190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:18:03.957239 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:18:03.967899 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:18:03.969917 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:18:03.971745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
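systemd-journald above comes up with a volatile runtime journal under /run/log/journal (6M used, 48.5M cap here); a moment later systemd-journal-flush.service moves it to persistent storage under /var/log/journal. A sketch for inspecting that split on the running system:

    # Sketch: where the journal lives and how big it is.
    journalctl --disk-usage                   # combined size of runtime and persistent journals
    ls /run/log/journal /var/log/journal      # volatile vs. persistent storage
    sudo journalctl --flush                   # ask journald to move runtime entries to /var/log/journal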
Sep 16 04:18:03.972642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:18:03.972673 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:18:03.974246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:18:03.982045 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:18:03.983036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:18:03.984245 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:18:03.985867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:18:03.986918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:18:03.989409 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:18:03.990704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:18:03.991518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:18:03.993932 systemd-journald[1152]: Time spent on flushing to /var/log/journal/d48977590fd14dce845a94e4910b8da5 is 16.518ms for 886 entries. Sep 16 04:18:03.993932 systemd-journald[1152]: System Journal (/var/log/journal/d48977590fd14dce845a94e4910b8da5) is 8M, max 195.6M, 187.6M free. Sep 16 04:18:04.015097 systemd-journald[1152]: Received client request to flush runtime journal. Sep 16 04:18:03.994777 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:18:04.000308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:18:04.002704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:18:04.006408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:18:04.007339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:18:04.010965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:18:04.012617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:18:04.015533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:18:04.020205 kernel: loop0: detected capacity change from 0 to 211168 Sep 16 04:18:04.020454 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:18:04.030454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:18:04.031312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:18:04.035057 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 16 04:18:04.035251 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 16 04:18:04.039256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:18:04.041581 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:18:04.047753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 16 04:18:04.049516 kernel: loop1: detected capacity change from 0 to 100632 Sep 16 04:18:04.051282 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:18:04.075182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:18:04.077151 kernel: loop2: detected capacity change from 0 to 119368 Sep 16 04:18:04.077761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:18:04.100310 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 16 04:18:04.100652 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 16 04:18:04.103640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:18:04.105290 kernel: loop3: detected capacity change from 0 to 211168 Sep 16 04:18:04.114206 kernel: loop4: detected capacity change from 0 to 100632 Sep 16 04:18:04.119183 kernel: loop5: detected capacity change from 0 to 119368 Sep 16 04:18:04.122650 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 16 04:18:04.122996 (sd-merge)[1226]: Merged extensions into '/usr'. Sep 16 04:18:04.126030 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:18:04.126045 systemd[1]: Reloading... Sep 16 04:18:04.170158 zram_generator::config[1253]: No configuration found. Sep 16 04:18:04.248894 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:18:04.303654 systemd[1]: Reloading finished in 177 ms. Sep 16 04:18:04.328450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:18:04.331488 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:18:04.343288 systemd[1]: Starting ensure-sysext.service... Sep 16 04:18:04.344910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:18:04.353864 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:18:04.353880 systemd[1]: Reloading... Sep 16 04:18:04.359449 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:18:04.359480 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:18:04.359709 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:18:04.359892 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:18:04.360648 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:18:04.360852 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Sep 16 04:18:04.360901 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Sep 16 04:18:04.363499 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:18:04.363506 systemd-tmpfiles[1290]: Skipping /boot Sep 16 04:18:04.369365 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:18:04.369375 systemd-tmpfiles[1290]: Skipping /boot Sep 16 04:18:04.396166 zram_generator::config[1317]: No configuration found. 
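The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why the preceding loopN capacity-change and squashfs messages appear; the follow-up "Reload requested from client PID 1201 ('systemd-sysext')" is PID 1 picking up the unit files the merge just exposed. A sketch of the matching day-two commands:

    # Sketch: inspect and refresh merged system extensions.
    systemd-sysext status                     # which images are merged onto /usr and /opt
    ls /etc/extensions /opt/extensions        # where Ignition placed the kubernetes sysext earlier
    sudo systemd-sysext refresh               # re-merge after adding or removing an image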
Sep 16 04:18:04.519134 systemd[1]: Reloading finished in 164 ms. Sep 16 04:18:04.536322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:18:04.542215 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:18:04.552182 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:18:04.554276 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:18:04.556473 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:18:04.558926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:18:04.563270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:18:04.565853 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:18:04.576213 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:18:04.578961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:18:04.580070 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:18:04.583328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:18:04.585371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:18:04.586169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:18:04.586273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:18:04.596295 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:18:04.598241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:18:04.598392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:18:04.599777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:18:04.599938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:18:04.601504 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:18:04.602416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:18:04.603626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:18:04.604020 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Sep 16 04:18:04.610087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:18:04.611805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:18:04.614187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:18:04.617430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:18:04.618439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
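systemd-udevd above selects interface naming scheme 'v255', the scheme that governs predictable network interface names; on this image the link still appears as eth0. A sketch for asking udev what it would derive for a given link (eth0 taken from the log):

    # Sketch: ask udev's net_id builtin which names it could assign to eth0.
    udevadm test-builtin net_id /sys/class/net/eth0
    udevadm info /sys/class/net/eth0 | grep ID_NET_NAME   # properties the naming scheme uses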
Sep 16 04:18:04.618566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:18:04.619254 augenrules[1390]: No rules Sep 16 04:18:04.623391 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:18:04.624417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:18:04.625666 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:18:04.625883 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:18:04.627369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:18:04.629022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:18:04.629294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:18:04.630678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:18:04.630805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:18:04.632396 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:18:04.632532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:18:04.634329 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:18:04.636610 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:18:04.638878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:18:04.650128 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:18:04.652514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:18:04.653703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:18:04.659340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:18:04.669486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:18:04.671654 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:18:04.675338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:18:04.675379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:18:04.679703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:18:04.680918 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:18:04.683178 systemd[1]: Finished ensure-sysext.service. Sep 16 04:18:04.684474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:18:04.684613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:18:04.686222 systemd[1]: modprobe@drm.service: Deactivated successfully. 
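"augenrules[1390]: No rules" simply means nothing is installed under /etc/audit/rules.d, so audit-rules.service loads an empty rule set; ensure-sysext.service afterwards re-checks the sysext merge now that udev and the module loads are done. A sketch for checking the audit side (auditctl assumed to be on the image):

    # Sketch: confirm the empty audit rule set the log refers to.
    sudo auditctl -l             # prints "No rules" when nothing is loaded
    ls /etc/audit/rules.d/       # where augenrules collects rule fragments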
Sep 16 04:18:04.686365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:18:04.687778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:18:04.688268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:18:04.689687 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:18:04.691278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:18:04.695306 augenrules[1426]: /sbin/augenrules: No change Sep 16 04:18:04.707447 augenrules[1464]: No rules Sep 16 04:18:04.708299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:18:04.708353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:18:04.710886 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 16 04:18:04.713499 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:18:04.713726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:18:04.716621 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 16 04:18:04.757827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:18:04.761093 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:18:04.786128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:18:04.818361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:18:04.845243 systemd-networkd[1445]: lo: Link UP Sep 16 04:18:04.845250 systemd-networkd[1445]: lo: Gained carrier Sep 16 04:18:04.845996 systemd-networkd[1445]: Enumeration completed Sep 16 04:18:04.846096 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:18:04.846454 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:18:04.846458 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:18:04.846962 systemd-networkd[1445]: eth0: Link UP Sep 16 04:18:04.847078 systemd-networkd[1445]: eth0: Gained carrier Sep 16 04:18:04.847093 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:18:04.848270 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:18:04.850097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:18:04.857202 systemd-networkd[1445]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:18:04.866270 systemd-resolved[1356]: Positive Trust Anchors: Sep 16 04:18:04.868068 systemd-resolved[1356]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:18:04.868115 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:18:04.868592 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:18:04.873978 systemd-resolved[1356]: Defaulting to hostname 'linux'. Sep 16 04:18:04.876546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:18:04.877637 systemd[1]: Reached target network.target - Network. Sep 16 04:18:04.878686 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:18:04.894829 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 16 04:18:04.896131 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:18:04.896309 systemd-timesyncd[1470]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 16 04:18:04.896361 systemd-timesyncd[1470]: Initial clock synchronization to Tue 2025-09-16 04:18:05.138516 UTC. Sep 16 04:18:04.902681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:18:04.904109 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:18:04.905251 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:18:04.906449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:18:04.907824 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:18:04.909178 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:18:04.910394 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:18:04.911608 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:18:04.911640 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:18:04.912579 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:18:04.914359 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:18:04.916575 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:18:04.919283 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:18:04.920577 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:18:04.921563 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:18:04.924295 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:18:04.925349 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:18:04.927079 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:18:04.928263 systemd[1]: Reached target sockets.target - Socket Units. 
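systemd-resolved above installs the root DNSSEC trust anchor (the ". IN DS 20326 …" record) plus negative trust anchors for private and special-use zones, then falls back to the hostname 'linux' because nothing configured one; systemd-timesyncd syncs against 10.0.0.1, the NTP server learned over DHCP. A sketch of the matching status commands:

    # Sketch: check resolver and time-sync state after boot.
    resolvectl status              # DNS servers, DNSSEC setting, per-link configuration
    hostnamectl                    # shows the fallback hostname "linux" seen above
    timedatectl timesync-status    # which NTP server was contacted and the current offset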
Sep 16 04:18:04.929177 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:18:04.930067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:18:04.930105 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:18:04.930968 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:18:04.932827 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:18:04.934632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:18:04.936661 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:18:04.938549 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:18:04.939496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:18:04.940349 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:18:04.943956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:18:04.945029 jq[1509]: false Sep 16 04:18:04.945869 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:18:04.947887 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:18:04.951063 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:18:04.953943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:18:04.954336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:18:04.955553 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:18:04.956323 extend-filesystems[1510]: Found /dev/vda6 Sep 16 04:18:04.959284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:18:04.962615 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:18:04.964083 extend-filesystems[1510]: Found /dev/vda9 Sep 16 04:18:04.966483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:18:04.966923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:18:04.972229 jq[1525]: true Sep 16 04:18:04.972391 extend-filesystems[1510]: Checking size of /dev/vda9 Sep 16 04:18:04.969221 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:18:04.969403 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:18:04.970529 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:18:04.970688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
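With D-Bus up, the units Ignition staged earlier begin to run for real: prepare-helm.service (preset to enabled in op(12)) unpacks helm into /opt/bin, while motdgen and ssh-key-proc-cmdline handle Flatcar-specific housekeeping. A sketch for verifying that provisioned unit afterwards (unit name and paths taken from the log):

    # Sketch: verify the Ignition-provisioned unit that is starting above.
    systemctl status prepare-helm.service
    systemctl cat prepare-helm.service    # the unit body written during the files stage
    ls /opt/bin                           # expect helm alongside the cilium tarball from op(4)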
Sep 16 04:18:04.985160 extend-filesystems[1510]: Resized partition /dev/vda9 Sep 16 04:18:04.986715 update_engine[1521]: I20250916 04:18:04.986091 1521 main.cc:92] Flatcar Update Engine starting Sep 16 04:18:04.987045 extend-filesystems[1547]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:18:04.992215 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 16 04:18:04.986806 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:18:04.995604 jq[1534]: true Sep 16 04:18:05.002889 tar[1531]: linux-arm64/LICENSE Sep 16 04:18:05.003110 tar[1531]: linux-arm64/helm Sep 16 04:18:05.016234 dbus-daemon[1507]: [system] SELinux support is enabled Sep 16 04:18:05.017248 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:18:05.022846 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Sep 16 04:18:05.023200 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:18:05.023235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:18:05.023483 systemd-logind[1519]: New seat seat0. Sep 16 04:18:05.024894 update_engine[1521]: I20250916 04:18:05.024835 1521 update_check_scheduler.cc:74] Next update check in 8m57s Sep 16 04:18:05.026168 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:18:05.026191 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:18:05.029028 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:18:05.032185 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 16 04:18:05.034616 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:18:05.044417 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:18:05.051879 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 16 04:18:05.051879 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 04:18:05.051879 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 16 04:18:05.063306 extend-filesystems[1510]: Resized filesystem in /dev/vda9 Sep 16 04:18:05.053684 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:18:05.067442 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:18:05.061335 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:18:05.065119 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:18:05.067385 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
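extend-filesystems above grows the ROOT filesystem on /dev/vda9 online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB), while update-engine and locksmithd start and schedule the next update check. A sketch of the arithmetic and the matching status commands (both CLIs ship with Flatcar, but treat their presence as an assumption):

    # Sketch: the numbers behind the online resize logged above.
    #   553472 blocks  x 4 KiB ~= 2.1 GiB   (initial ROOT filesystem)
    #   1864699 blocks x 4 KiB ~= 7.1 GiB   (after growing to fill /dev/vda9)
    df -h /                        # should now show the grown root filesystem
    update_engine_client -status   # update-engine state (idle, next check pending)
    locksmithctl status            # reboot-coordination state for applied updates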
Sep 16 04:18:05.106259 locksmithd[1565]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:18:05.188988 containerd[1535]: time="2025-09-16T04:18:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:18:05.189814 containerd[1535]: time="2025-09-16T04:18:05.189777981Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:18:05.200609 containerd[1535]: time="2025-09-16T04:18:05.200567192Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.457µs" Sep 16 04:18:05.200660 containerd[1535]: time="2025-09-16T04:18:05.200608033Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:18:05.200660 containerd[1535]: time="2025-09-16T04:18:05.200628639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:18:05.200803 containerd[1535]: time="2025-09-16T04:18:05.200784624Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:18:05.200827 containerd[1535]: time="2025-09-16T04:18:05.200805518Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:18:05.200846 containerd[1535]: time="2025-09-16T04:18:05.200829997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:18:05.200904 containerd[1535]: time="2025-09-16T04:18:05.200879698Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:18:05.200904 containerd[1535]: time="2025-09-16T04:18:05.200901046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201191 containerd[1535]: time="2025-09-16T04:18:05.201155320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201191 containerd[1535]: time="2025-09-16T04:18:05.201188495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201231 containerd[1535]: time="2025-09-16T04:18:05.201201517Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201231 containerd[1535]: time="2025-09-16T04:18:05.201210254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201304 containerd[1535]: time="2025-09-16T04:18:05.201286166Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201508 containerd[1535]: time="2025-09-16T04:18:05.201489008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201535 containerd[1535]: time="2025-09-16T04:18:05.201522842Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:18:05.201561 containerd[1535]: time="2025-09-16T04:18:05.201533681Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:18:05.201579 containerd[1535]: time="2025-09-16T04:18:05.201565290Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:18:05.201791 containerd[1535]: time="2025-09-16T04:18:05.201775221Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:18:05.201851 containerd[1535]: time="2025-09-16T04:18:05.201836090Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:18:05.207064 containerd[1535]: time="2025-09-16T04:18:05.207025463Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:18:05.207155 containerd[1535]: time="2025-09-16T04:18:05.207096058Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:18:05.207155 containerd[1535]: time="2025-09-16T04:18:05.207111306Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:18:05.207155 containerd[1535]: time="2025-09-16T04:18:05.207123257Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:18:05.207155 containerd[1535]: time="2025-09-16T04:18:05.207136321Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:18:05.207155 containerd[1535]: time="2025-09-16T04:18:05.207148891Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207161543Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207188577Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207203455Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207214128Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207224060Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:18:05.207269 containerd[1535]: time="2025-09-16T04:18:05.207242317Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:18:05.207366 containerd[1535]: time="2025-09-16T04:18:05.207357297Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:18:05.207385 containerd[1535]: time="2025-09-16T04:18:05.207377820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:18:05.207403 containerd[1535]: time="2025-09-16T04:18:05.207394510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 
04:18:05.207420 containerd[1535]: time="2025-09-16T04:18:05.207405967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:18:05.207420 containerd[1535]: time="2025-09-16T04:18:05.207416229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:18:05.207452 containerd[1535]: time="2025-09-16T04:18:05.207427562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:18:05.207452 containerd[1535]: time="2025-09-16T04:18:05.207440255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:18:05.207452 containerd[1535]: time="2025-09-16T04:18:05.207450187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:18:05.207507 containerd[1535]: time="2025-09-16T04:18:05.207461273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:18:05.207507 containerd[1535]: time="2025-09-16T04:18:05.207477180Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:18:05.207507 containerd[1535]: time="2025-09-16T04:18:05.207488349Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:18:05.209172 containerd[1535]: time="2025-09-16T04:18:05.207876065Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:18:05.209172 containerd[1535]: time="2025-09-16T04:18:05.207937305Z" level=info msg="Start snapshots syncer" Sep 16 04:18:05.209172 containerd[1535]: time="2025-09-16T04:18:05.207963598Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:18:05.209723 containerd[1535]: time="2025-09-16T04:18:05.209678855Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:18:05.209815 containerd[1535]: time="2025-09-16T04:18:05.209739930Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:18:05.209861 containerd[1535]: time="2025-09-16T04:18:05.209841557Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:18:05.209992 containerd[1535]: time="2025-09-16T04:18:05.209973104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:18:05.210018 containerd[1535]: time="2025-09-16T04:18:05.210006691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:18:05.210037 containerd[1535]: time="2025-09-16T04:18:05.210018725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:18:05.210037 containerd[1535]: time="2025-09-16T04:18:05.210030140Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:18:05.210070 containerd[1535]: time="2025-09-16T04:18:05.210041597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:18:05.210070 containerd[1535]: time="2025-09-16T04:18:05.210052477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:18:05.210070 containerd[1535]: time="2025-09-16T04:18:05.210063687Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:18:05.210124 containerd[1535]: time="2025-09-16T04:18:05.210088496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:18:05.210124 containerd[1535]: 
time="2025-09-16T04:18:05.210100241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:18:05.210124 containerd[1535]: time="2025-09-16T04:18:05.210111615Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:18:05.210186 containerd[1535]: time="2025-09-16T04:18:05.210138691Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:18:05.210186 containerd[1535]: time="2025-09-16T04:18:05.210153445Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:18:05.210226 containerd[1535]: time="2025-09-16T04:18:05.210184642Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:18:05.210226 containerd[1535]: time="2025-09-16T04:18:05.210195563Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:18:05.210226 containerd[1535]: time="2025-09-16T04:18:05.210203393Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:18:05.210226 containerd[1535]: time="2025-09-16T04:18:05.210217652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:18:05.210305 containerd[1535]: time="2025-09-16T04:18:05.210228820Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:18:05.210325 containerd[1535]: time="2025-09-16T04:18:05.210318455Z" level=info msg="runtime interface created" Sep 16 04:18:05.210348 containerd[1535]: time="2025-09-16T04:18:05.210324637Z" level=info msg="created NRI interface" Sep 16 04:18:05.210348 containerd[1535]: time="2025-09-16T04:18:05.210333827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:18:05.210348 containerd[1535]: time="2025-09-16T04:18:05.210346231Z" level=info msg="Connect containerd service" Sep 16 04:18:05.210397 containerd[1535]: time="2025-09-16T04:18:05.210372359Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:18:05.211054 containerd[1535]: time="2025-09-16T04:18:05.211025724Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:18:05.275459 containerd[1535]: time="2025-09-16T04:18:05.275389760Z" level=info msg="Start subscribing containerd event" Sep 16 04:18:05.275543 containerd[1535]: time="2025-09-16T04:18:05.275475233Z" level=info msg="Start recovering state" Sep 16 04:18:05.275543 containerd[1535]: time="2025-09-16T04:18:05.275418979Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:18:05.275598 containerd[1535]: time="2025-09-16T04:18:05.275579085Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 16 04:18:05.275778 containerd[1535]: time="2025-09-16T04:18:05.275756129Z" level=info msg="Start event monitor" Sep 16 04:18:05.275802 containerd[1535]: time="2025-09-16T04:18:05.275779455Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:18:05.275802 containerd[1535]: time="2025-09-16T04:18:05.275788109Z" level=info msg="Start streaming server" Sep 16 04:18:05.275802 containerd[1535]: time="2025-09-16T04:18:05.275796351Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:18:05.275855 containerd[1535]: time="2025-09-16T04:18:05.275802986Z" level=info msg="runtime interface starting up..." Sep 16 04:18:05.275855 containerd[1535]: time="2025-09-16T04:18:05.275808921Z" level=info msg="starting plugins..." Sep 16 04:18:05.275855 containerd[1535]: time="2025-09-16T04:18:05.275825818Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:18:05.276068 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:18:05.277063 containerd[1535]: time="2025-09-16T04:18:05.277018226Z" level=info msg="containerd successfully booted in 0.088393s" Sep 16 04:18:05.343453 tar[1531]: linux-arm64/README.md Sep 16 04:18:05.358059 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:18:05.681888 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:18:05.701057 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:18:05.703510 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:18:05.721565 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:18:05.721806 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:18:05.724181 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:18:05.745985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:18:05.748516 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:18:05.750244 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:18:05.751264 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:18:06.772812 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:18:06.774834 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:55960.service - OpenSSH per-connection server daemon (10.0.0.1:55960). Sep 16 04:18:06.804322 systemd-networkd[1445]: eth0: Gained IPv6LL Sep 16 04:18:06.807090 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:18:06.808677 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:18:06.812946 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 16 04:18:06.815259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:06.817483 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:18:06.853674 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 16 04:18:06.854839 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 16 04:18:06.857484 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
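Editor's note: a few entries back, containerd warned "failed to load cni during init ... no network config found in /etc/cni/net.d". That is expected on a node that has not yet joined a cluster, since no CNI plugin has dropped a config file yet. Below is a minimal sketch of that precondition, assuming the `.conf`/`.conflist`/`.json` extensions libcni accepts; it is an illustration, not containerd's own code.

```python
#!/usr/bin/env python3
"""Check whether a CNI config directory contains any network config files —
the condition behind containerd's "no network config found" warning above.
Illustrative sketch only; not containerd's implementation."""
import sys
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # directory named in the log entry

def find_cni_configs(conf_dir: Path) -> list:
    # libcni loads .conf, .conflist and .json files; treating these
    # extensions as the full set is an assumption of this sketch.
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    configs = find_cni_configs(CNI_CONF_DIR)
    if not configs:
        print(f"no network config found in {CNI_CONF_DIR}", file=sys.stderr)
        sys.exit(1)
    for cfg in configs:
        print(f"found CNI config: {cfg}")
```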
Sep 16 04:18:06.858309 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 55960 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:06.860131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:18:06.860620 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:06.868012 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:18:06.870213 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:18:06.878708 systemd-logind[1519]: New session 1 of user core. Sep 16 04:18:06.901797 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:18:06.908601 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:18:06.930364 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:18:06.933973 systemd-logind[1519]: New session c1 of user core. Sep 16 04:18:07.053979 systemd[1640]: Queued start job for default target default.target. Sep 16 04:18:07.078274 systemd[1640]: Created slice app.slice - User Application Slice. Sep 16 04:18:07.078461 systemd[1640]: Reached target paths.target - Paths. Sep 16 04:18:07.078514 systemd[1640]: Reached target timers.target - Timers. Sep 16 04:18:07.079795 systemd[1640]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:18:07.089996 systemd[1640]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:18:07.090056 systemd[1640]: Reached target sockets.target - Sockets. Sep 16 04:18:07.090098 systemd[1640]: Reached target basic.target - Basic System. Sep 16 04:18:07.090135 systemd[1640]: Reached target default.target - Main User Target. Sep 16 04:18:07.090183 systemd[1640]: Startup finished in 148ms. Sep 16 04:18:07.090245 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:18:07.093198 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:18:07.160646 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:55966.service - OpenSSH per-connection server daemon (10.0.0.1:55966). Sep 16 04:18:07.226746 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 55966 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:07.227621 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:07.231682 systemd-logind[1519]: New session 2 of user core. Sep 16 04:18:07.243376 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:18:07.297264 sshd[1654]: Connection closed by 10.0.0.1 port 55966 Sep 16 04:18:07.297728 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:07.307155 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:55966.service: Deactivated successfully. Sep 16 04:18:07.309050 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:18:07.310272 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:18:07.313699 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:55978.service - OpenSSH per-connection server daemon (10.0.0.1:55978). Sep 16 04:18:07.315912 systemd-logind[1519]: Removed session 2. 
Sep 16 04:18:07.373565 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 55978 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:07.374637 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:07.378970 systemd-logind[1519]: New session 3 of user core. Sep 16 04:18:07.386347 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:18:07.437684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:07.439037 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:18:07.441454 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:18:07.444255 systemd[1]: Startup finished in 2.007s (kernel) + 9.794s (initrd) + 4.101s (userspace) = 15.903s. Sep 16 04:18:07.446038 sshd[1663]: Connection closed by 10.0.0.1 port 55978 Sep 16 04:18:07.446262 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:07.451956 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:55978.service: Deactivated successfully. Sep 16 04:18:07.453830 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:18:07.456203 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:18:07.458088 systemd-logind[1519]: Removed session 3. Sep 16 04:18:07.823497 kubelet[1670]: E0916 04:18:07.823444 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:18:07.826264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:18:07.826410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:18:07.826800 systemd[1]: kubelet.service: Consumed 757ms CPU time, 257.2M memory peak. Sep 16 04:18:17.592534 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:56770.service - OpenSSH per-connection server daemon (10.0.0.1:56770). Sep 16 04:18:17.654420 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 56770 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:17.655707 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:17.662209 systemd-logind[1519]: New session 4 of user core. Sep 16 04:18:17.678340 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:18:17.733396 sshd[1689]: Connection closed by 10.0.0.1 port 56770 Sep 16 04:18:17.733698 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:17.752746 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:56770.service: Deactivated successfully. Sep 16 04:18:17.754523 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:18:17.756795 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:18:17.759734 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:56786.service - OpenSSH per-connection server daemon (10.0.0.1:56786). Sep 16 04:18:17.760791 systemd-logind[1519]: Removed session 4. 
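Editor's note: the kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") recurs on every restart later in the log; the unit keeps exiting with status 1 until that file exists, which is typically only after `kubeadm init` or `kubeadm join` writes it. A minimal sketch of the precondition, with the path taken from the log (illustrative, not kubelet source):

```python
#!/usr/bin/env python3
"""Report whether the kubeadm-managed kubelet config file is in place,
mirroring the failure mode seen repeatedly in this log."""
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

def kubelet_config_ready(path: str = KUBELET_CONFIG) -> bool:
    # The kubelet only needs the file to exist and be readable at startup;
    # validating its contents is out of scope for this sketch.
    return os.path.isfile(path) and os.access(path, os.R_OK)

if __name__ == "__main__":
    if kubelet_config_ready():
        print(f"{KUBELET_CONFIG} present; kubelet can start")
        sys.exit(0)
    print(f"open {KUBELET_CONFIG}: no such file or directory", file=sys.stderr)
    sys.exit(1)
```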
Sep 16 04:18:17.818944 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 56786 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:17.820091 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:17.824206 systemd-logind[1519]: New session 5 of user core. Sep 16 04:18:17.830308 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:18:17.830996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:18:17.832345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:17.881169 sshd[1699]: Connection closed by 10.0.0.1 port 56786 Sep 16 04:18:17.881602 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:17.890292 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:56786.service: Deactivated successfully. Sep 16 04:18:17.893793 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:18:17.896432 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:18:17.899441 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:56792.service - OpenSSH per-connection server daemon (10.0.0.1:56792). Sep 16 04:18:17.902616 systemd-logind[1519]: Removed session 5. Sep 16 04:18:17.966864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:17.970344 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:18:17.974421 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 56792 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:17.972649 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:17.978764 systemd-logind[1519]: New session 6 of user core. Sep 16 04:18:17.981949 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:18:18.005556 kubelet[1715]: E0916 04:18:18.005503 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:18:18.008758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:18:18.008888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:18:18.009221 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.2M memory peak. Sep 16 04:18:18.037314 sshd[1722]: Connection closed by 10.0.0.1 port 56792 Sep 16 04:18:18.036749 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:18.044917 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:56792.service: Deactivated successfully. Sep 16 04:18:18.047374 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:18:18.049024 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:18:18.051205 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). Sep 16 04:18:18.053900 systemd-logind[1519]: Removed session 6. 
Sep 16 04:18:18.119277 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:18.120436 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:18.124191 systemd-logind[1519]: New session 7 of user core. Sep 16 04:18:18.144383 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:18:18.201354 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:18:18.201626 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:18:18.217882 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 16 04:18:18.219362 sshd[1732]: Connection closed by 10.0.0.1 port 56796 Sep 16 04:18:18.220089 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:18.240909 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:56796.service: Deactivated successfully. Sep 16 04:18:18.242309 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:18:18.243904 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:18:18.245845 systemd-logind[1519]: Removed session 7. Sep 16 04:18:18.248301 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:56808.service - OpenSSH per-connection server daemon (10.0.0.1:56808). Sep 16 04:18:18.300553 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 56808 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:18.304005 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:18.311732 systemd-logind[1519]: New session 8 of user core. Sep 16 04:18:18.331335 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:18:18.381824 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:18:18.382075 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:18:18.456414 sudo[1744]: pam_unix(sudo:session): session closed for user root Sep 16 04:18:18.461329 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:18:18.461573 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:18:18.470794 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:18:18.516678 augenrules[1766]: No rules Sep 16 04:18:18.517790 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:18:18.517990 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:18:18.519069 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 16 04:18:18.521042 sshd[1742]: Connection closed by 10.0.0.1 port 56808 Sep 16 04:18:18.521699 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:18.527908 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:56808.service: Deactivated successfully. Sep 16 04:18:18.529323 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:18:18.529928 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:18:18.531933 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:56816.service - OpenSSH per-connection server daemon (10.0.0.1:56816). Sep 16 04:18:18.535859 systemd-logind[1519]: Removed session 8. 
Sep 16 04:18:18.587960 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 56816 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:18:18.589117 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:18:18.593061 systemd-logind[1519]: New session 9 of user core. Sep 16 04:18:18.606316 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:18:18.657280 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:18:18.657533 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:18:18.943740 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:18:18.966548 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:18:19.188230 dockerd[1800]: time="2025-09-16T04:18:19.188174653Z" level=info msg="Starting up" Sep 16 04:18:19.189364 dockerd[1800]: time="2025-09-16T04:18:19.189342522Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:18:19.199316 dockerd[1800]: time="2025-09-16T04:18:19.199237999Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:18:19.229052 dockerd[1800]: time="2025-09-16T04:18:19.229013605Z" level=info msg="Loading containers: start." Sep 16 04:18:19.239181 kernel: Initializing XFRM netlink socket Sep 16 04:18:19.427249 systemd-networkd[1445]: docker0: Link UP Sep 16 04:18:19.431186 dockerd[1800]: time="2025-09-16T04:18:19.431138264Z" level=info msg="Loading containers: done." Sep 16 04:18:19.443696 dockerd[1800]: time="2025-09-16T04:18:19.443659387Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:18:19.443901 dockerd[1800]: time="2025-09-16T04:18:19.443880977Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:18:19.444038 dockerd[1800]: time="2025-09-16T04:18:19.444021510Z" level=info msg="Initializing buildkit" Sep 16 04:18:19.465565 dockerd[1800]: time="2025-09-16T04:18:19.465281292Z" level=info msg="Completed buildkit initialization" Sep 16 04:18:19.472562 dockerd[1800]: time="2025-09-16T04:18:19.472523396Z" level=info msg="Daemon has completed initialization" Sep 16 04:18:19.472776 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:18:19.473439 dockerd[1800]: time="2025-09-16T04:18:19.472700177Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:18:20.336574 containerd[1535]: time="2025-09-16T04:18:20.336535880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 04:18:21.047043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473840890.mount: Deactivated successfully. 
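Editor's note: dockerd reports "API listen on /run/docker.sock" once initialization completes. A quick liveness probe of that endpoint needs only the standard library; the sketch below queries the Engine API's `/version` endpoint over the Unix socket named in the log. The helper class is hypothetical glue, not anything present on this host.

```python
#!/usr/bin/env python3
"""Probe the Docker Engine API over the Unix socket reported in the log."""
import http.client
import json
import socket

class UnixSocketHTTPConnection(http.client.HTTPConnection):
    """HTTP connection whose transport is a Unix domain socket."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")  # host is only used for the Host header
        self._sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._sock_path)

if __name__ == "__main__":
    conn = UnixSocketHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    version = json.loads(conn.getresponse().read())
    # The daemon line above reports version=28.0.4 for this host.
    print(version.get("Version"), version.get("Os"), version.get("Arch"))
```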
Sep 16 04:18:22.527031 containerd[1535]: time="2025-09-16T04:18:22.526957569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:22.527470 containerd[1535]: time="2025-09-16T04:18:22.527425067Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 16 04:18:22.530143 containerd[1535]: time="2025-09-16T04:18:22.530104022Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:22.532560 containerd[1535]: time="2025-09-16T04:18:22.532526457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:22.533619 containerd[1535]: time="2025-09-16T04:18:22.533593907Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.197016384s" Sep 16 04:18:22.533673 containerd[1535]: time="2025-09-16T04:18:22.533627371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 16 04:18:22.534770 containerd[1535]: time="2025-09-16T04:18:22.534742609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 04:18:24.062063 containerd[1535]: time="2025-09-16T04:18:24.062001323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:24.062739 containerd[1535]: time="2025-09-16T04:18:24.062712944Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 16 04:18:24.063420 containerd[1535]: time="2025-09-16T04:18:24.063393129Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:24.066415 containerd[1535]: time="2025-09-16T04:18:24.066361021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:24.067253 containerd[1535]: time="2025-09-16T04:18:24.067227291Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.532454073s" Sep 16 04:18:24.067297 containerd[1535]: time="2025-09-16T04:18:24.067258445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 16 04:18:24.067686 
containerd[1535]: time="2025-09-16T04:18:24.067667102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 04:18:25.529669 containerd[1535]: time="2025-09-16T04:18:25.529216903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:25.529669 containerd[1535]: time="2025-09-16T04:18:25.529672015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 16 04:18:25.530576 containerd[1535]: time="2025-09-16T04:18:25.530517463Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:25.533563 containerd[1535]: time="2025-09-16T04:18:25.533527238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:25.534604 containerd[1535]: time="2025-09-16T04:18:25.534568095Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.466779144s" Sep 16 04:18:25.534636 containerd[1535]: time="2025-09-16T04:18:25.534602968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 16 04:18:25.535344 containerd[1535]: time="2025-09-16T04:18:25.535316219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 04:18:26.466234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313264870.mount: Deactivated successfully. 
Sep 16 04:18:27.728273 containerd[1535]: time="2025-09-16T04:18:27.728216730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:27.729041 containerd[1535]: time="2025-09-16T04:18:27.729016892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 16 04:18:27.729957 containerd[1535]: time="2025-09-16T04:18:27.729929034Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:27.732215 containerd[1535]: time="2025-09-16T04:18:27.732183005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:27.732749 containerd[1535]: time="2025-09-16T04:18:27.732722549Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.197372785s" Sep 16 04:18:27.732801 containerd[1535]: time="2025-09-16T04:18:27.732754080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 16 04:18:27.733306 containerd[1535]: time="2025-09-16T04:18:27.733270908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 04:18:28.153592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:18:28.154871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:28.288565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:28.292297 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:18:28.306652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344413158.mount: Deactivated successfully. Sep 16 04:18:28.339426 kubelet[2103]: E0916 04:18:28.339371 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:18:28.343019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:18:28.343165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:18:28.344413 systemd[1]: kubelet.service: Consumed 136ms CPU time, 106.8M memory peak. 
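Editor's note: the kubelet unit has now been restarted twice ("restart counter is at 1" at 04:18:17.830996 and "at 2" at 04:18:28.153592), each time roughly ten seconds after the previous failure, which is consistent with an on-failure restart policy with a delay of about 10 s (an inference from the timestamps; the unit file itself does not appear in this log). The gap is easy to check:

```python
from datetime import datetime

# Timestamps of the two "Scheduled restart job" entries in the log above.
scheduled = ["04:18:17.830996", "04:18:28.153592"]
t0, t1 = (datetime.strptime(t, "%H:%M:%S.%f") for t in scheduled)
print(f"{(t1 - t0).total_seconds():.3f}s between scheduled restarts")
# -> 10.323s, i.e. the 04:18:18 failure plus roughly a 10 s restart delay
```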
Sep 16 04:18:29.233685 containerd[1535]: time="2025-09-16T04:18:29.233636951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:29.234596 containerd[1535]: time="2025-09-16T04:18:29.234159712Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 16 04:18:29.235468 containerd[1535]: time="2025-09-16T04:18:29.235436559Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:29.238813 containerd[1535]: time="2025-09-16T04:18:29.238759477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:29.240127 containerd[1535]: time="2025-09-16T04:18:29.240088587Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.506783227s" Sep 16 04:18:29.240127 containerd[1535]: time="2025-09-16T04:18:29.240126514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 16 04:18:29.240640 containerd[1535]: time="2025-09-16T04:18:29.240610548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:18:29.665699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283957087.mount: Deactivated successfully. 
Sep 16 04:18:29.669391 containerd[1535]: time="2025-09-16T04:18:29.669351319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:18:29.669803 containerd[1535]: time="2025-09-16T04:18:29.669784090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 16 04:18:29.670730 containerd[1535]: time="2025-09-16T04:18:29.670702858Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:18:29.672764 containerd[1535]: time="2025-09-16T04:18:29.672714526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:18:29.673402 containerd[1535]: time="2025-09-16T04:18:29.673372253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 432.726101ms" Sep 16 04:18:29.673467 containerd[1535]: time="2025-09-16T04:18:29.673406535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:18:29.674070 containerd[1535]: time="2025-09-16T04:18:29.674047642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 04:18:30.115319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251682050.mount: Deactivated successfully. 
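Editor's note: each "stop pulling image ... bytes read=N" / "Pulled image ... in D" pair above combines a transferred byte count with a wall-clock pull time, so approximate pull throughput falls straight out of the log. A small parser over two sample entries copied from the containerd lines above (figures reflect compressed transfer, so they are rough):

```python
import re

# (image, bytes read, pull duration) copied from the containerd entries above.
pulls = [
    ("registry.k8s.io/kube-apiserver:v1.33.5", 27390230, "2.197016384s"),
    ("registry.k8s.io/pause:3.10", 268705, "432.726101ms"),
]

def to_seconds(duration: str) -> float:
    """Parse the duration suffixes containerd prints in these lines (s, ms, µs)."""
    m = re.fullmatch(r"([\d.]+)(µs|ms|s)", duration)
    value, unit = float(m.group(1)), m.group(2)
    return value * {"µs": 1e-6, "ms": 1e-3, "s": 1.0}[unit]

for image, nbytes, duration in pulls:
    secs = to_seconds(duration)
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s over {secs:.2f}s")
```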
Sep 16 04:18:33.183072 containerd[1535]: time="2025-09-16T04:18:33.183005346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:33.184296 containerd[1535]: time="2025-09-16T04:18:33.184244357Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 16 04:18:33.185091 containerd[1535]: time="2025-09-16T04:18:33.185064147Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:33.187822 containerd[1535]: time="2025-09-16T04:18:33.187767173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:18:33.190157 containerd[1535]: time="2025-09-16T04:18:33.189913117Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.515820945s" Sep 16 04:18:33.190157 containerd[1535]: time="2025-09-16T04:18:33.189954347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 16 04:18:38.403652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:18:38.405161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:38.526091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:38.541443 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:18:38.576146 kubelet[2255]: E0916 04:18:38.575718 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:18:38.578045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:18:38.578204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:18:38.578538 systemd[1]: kubelet.service: Consumed 134ms CPU time, 105.8M memory peak. Sep 16 04:18:38.608717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:38.608854 systemd[1]: kubelet.service: Consumed 134ms CPU time, 105.8M memory peak. Sep 16 04:18:38.610838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:38.632853 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-9.scope)... Sep 16 04:18:38.632876 systemd[1]: Reloading... Sep 16 04:18:38.707511 zram_generator::config[2312]: No configuration found. Sep 16 04:18:38.975448 systemd[1]: Reloading finished in 342 ms. Sep 16 04:18:39.028693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:18:39.028783 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 16 04:18:39.029034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:39.029087 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95M memory peak. Sep 16 04:18:39.031563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:39.144355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:39.147839 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:18:39.178124 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:18:39.178124 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:18:39.178124 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:18:39.178463 kubelet[2357]: I0916 04:18:39.178172 2357 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:18:40.030085 kubelet[2357]: I0916 04:18:40.030045 2357 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 04:18:40.031226 kubelet[2357]: I0916 04:18:40.030173 2357 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:18:40.031226 kubelet[2357]: I0916 04:18:40.030396 2357 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 04:18:40.050182 kubelet[2357]: E0916 04:18:40.050122 2357 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 04:18:40.053664 kubelet[2357]: I0916 04:18:40.053564 2357 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:18:40.059816 kubelet[2357]: I0916 04:18:40.059793 2357 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:18:40.062442 kubelet[2357]: I0916 04:18:40.062426 2357 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:18:40.064193 kubelet[2357]: I0916 04:18:40.064153 2357 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:18:40.064356 kubelet[2357]: I0916 04:18:40.064193 2357 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:18:40.064444 kubelet[2357]: I0916 04:18:40.064412 2357 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:18:40.064444 kubelet[2357]: I0916 04:18:40.064420 2357 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 04:18:40.065128 kubelet[2357]: I0916 04:18:40.065093 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:18:40.067494 kubelet[2357]: I0916 04:18:40.067476 2357 kubelet.go:480] "Attempting to sync node with API server" Sep 16 04:18:40.067543 kubelet[2357]: I0916 04:18:40.067500 2357 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:18:40.067543 kubelet[2357]: I0916 04:18:40.067524 2357 kubelet.go:386] "Adding apiserver pod source" Sep 16 04:18:40.067543 kubelet[2357]: I0916 04:18:40.067534 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:18:40.069164 kubelet[2357]: I0916 04:18:40.069105 2357 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:18:40.069164 kubelet[2357]: E0916 04:18:40.069128 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 04:18:40.069805 kubelet[2357]: E0916 04:18:40.069778 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 04:18:40.069894 kubelet[2357]: I0916 04:18:40.069838 2357 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 04:18:40.070022 kubelet[2357]: W0916 04:18:40.070009 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:18:40.072543 kubelet[2357]: I0916 04:18:40.072522 2357 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:18:40.072597 kubelet[2357]: I0916 04:18:40.072576 2357 server.go:1289] "Started kubelet" Sep 16 04:18:40.073228 kubelet[2357]: I0916 04:18:40.072671 2357 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:18:40.076235 kubelet[2357]: I0916 04:18:40.076157 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:18:40.076716 kubelet[2357]: I0916 04:18:40.076677 2357 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:18:40.076986 kubelet[2357]: I0916 04:18:40.076958 2357 server.go:317] "Adding debug handlers to kubelet server" Sep 16 04:18:40.078027 kubelet[2357]: I0916 04:18:40.078005 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:18:40.079304 kubelet[2357]: I0916 04:18:40.079259 2357 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:18:40.079793 kubelet[2357]: E0916 04:18:40.079772 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:18:40.079898 kubelet[2357]: I0916 04:18:40.079871 2357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:18:40.080513 kubelet[2357]: E0916 04:18:40.080486 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Sep 16 04:18:40.080723 kubelet[2357]: E0916 04:18:40.079596 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865a856be5e1ed9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:18:40.072539865 +0000 UTC m=+0.921614675,LastTimestamp:2025-09-16 04:18:40.072539865 +0000 UTC m=+0.921614675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:18:40.081303 kubelet[2357]: I0916 04:18:40.081280 2357 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:18:40.081357 kubelet[2357]: I0916 04:18:40.081321 2357 
reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:18:40.081662 kubelet[2357]: I0916 04:18:40.081634 2357 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:18:40.081831 kubelet[2357]: E0916 04:18:40.081739 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 04:18:40.082219 kubelet[2357]: E0916 04:18:40.082198 2357 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:18:40.083497 kubelet[2357]: I0916 04:18:40.083476 2357 factory.go:223] Registration of the containerd container factory successfully Sep 16 04:18:40.083497 kubelet[2357]: I0916 04:18:40.083494 2357 factory.go:223] Registration of the systemd container factory successfully Sep 16 04:18:40.095172 kubelet[2357]: I0916 04:18:40.095024 2357 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:18:40.095172 kubelet[2357]: I0916 04:18:40.095039 2357 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:18:40.095172 kubelet[2357]: I0916 04:18:40.095055 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:18:40.096172 kubelet[2357]: I0916 04:18:40.096031 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 04:18:40.096964 kubelet[2357]: I0916 04:18:40.096950 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 04:18:40.097034 kubelet[2357]: I0916 04:18:40.097025 2357 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 04:18:40.097093 kubelet[2357]: I0916 04:18:40.097083 2357 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:18:40.097165 kubelet[2357]: I0916 04:18:40.097155 2357 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 04:18:40.097248 kubelet[2357]: E0916 04:18:40.097233 2357 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:18:40.168119 kubelet[2357]: E0916 04:18:40.168078 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 04:18:40.168119 kubelet[2357]: I0916 04:18:40.168109 2357 policy_none.go:49] "None policy: Start" Sep 16 04:18:40.168119 kubelet[2357]: I0916 04:18:40.168132 2357 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:18:40.168294 kubelet[2357]: I0916 04:18:40.168160 2357 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:18:40.173496 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
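Editor's note: the container-manager config dumped a little earlier lists HardEvictionThresholds in which each signal carries either a fixed Quantity (memory.available < 100Mi) or a Percentage of capacity (nodefs.available < 10%, imagefs.available < 15%, the inodesFree signals < 5%), all with the LessThan operator. A tiny worked example of how such a threshold is evaluated; the available/capacity numbers are invented for illustration:

```python
MI = 1024 * 1024

def breaches_threshold(available, capacity, quantity=None, percentage=None):
    """LessThan-style check: the limit is either an absolute quantity or a
    fraction of capacity, mirroring the HardEvictionThresholds entries above."""
    limit = quantity if quantity is not None else capacity * percentage
    return available < limit

# memory.available < 100Mi on a node with 4Gi of memory and 80Mi free:
print(breaches_threshold(80 * MI, 4096 * MI, quantity=100 * MI))   # True -> evict
# nodefs.available < 10% on a 100 GB filesystem with 30 GB free:
print(breaches_threshold(30e9, 100e9, percentage=0.10))            # False
```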
Sep 16 04:18:40.180323 kubelet[2357]: E0916 04:18:40.180294 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:18:40.186134 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:18:40.198000 kubelet[2357]: E0916 04:18:40.197980 2357 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:18:40.210100 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:18:40.211269 kubelet[2357]: E0916 04:18:40.211244 2357 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 04:18:40.211432 kubelet[2357]: I0916 04:18:40.211418 2357 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:18:40.211468 kubelet[2357]: I0916 04:18:40.211436 2357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:18:40.212053 kubelet[2357]: I0916 04:18:40.212014 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:18:40.213168 kubelet[2357]: E0916 04:18:40.213149 2357 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:18:40.213237 kubelet[2357]: E0916 04:18:40.213183 2357 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 04:18:40.281547 kubelet[2357]: E0916 04:18:40.281454 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Sep 16 04:18:40.312569 kubelet[2357]: I0916 04:18:40.312536 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:18:40.312976 kubelet[2357]: E0916 04:18:40.312952 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 16 04:18:40.409117 systemd[1]: Created slice kubepods-burstable-pod88ab6e0bed2e5ff24f20da5ce13a6290.slice - libcontainer container kubepods-burstable-pod88ab6e0bed2e5ff24f20da5ce13a6290.slice. Sep 16 04:18:40.433816 kubelet[2357]: E0916 04:18:40.433779 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:40.436999 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 16 04:18:40.439243 kubelet[2357]: E0916 04:18:40.439215 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:40.441417 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 16 04:18:40.442819 kubelet[2357]: E0916 04:18:40.442681 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:40.483069 kubelet[2357]: I0916 04:18:40.483021 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:40.483069 kubelet[2357]: I0916 04:18:40.483060 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:40.483233 kubelet[2357]: I0916 04:18:40.483085 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:40.483233 kubelet[2357]: I0916 04:18:40.483102 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:40.483233 kubelet[2357]: I0916 04:18:40.483117 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:40.483233 kubelet[2357]: I0916 04:18:40.483172 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:40.483233 kubelet[2357]: I0916 04:18:40.483213 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:40.483371 kubelet[2357]: I0916 04:18:40.483238 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:40.483371 kubelet[2357]: I0916 04:18:40.483256 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:40.514165 kubelet[2357]: I0916 04:18:40.514115 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:18:40.514514 kubelet[2357]: E0916 04:18:40.514481 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 16 04:18:40.682075 kubelet[2357]: E0916 04:18:40.682026 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Sep 16 04:18:40.734594 kubelet[2357]: E0916 04:18:40.734558 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.735199 containerd[1535]: time="2025-09-16T04:18:40.735154268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88ab6e0bed2e5ff24f20da5ce13a6290,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:40.740443 kubelet[2357]: E0916 04:18:40.740381 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.740909 containerd[1535]: time="2025-09-16T04:18:40.740878578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:40.743415 kubelet[2357]: E0916 04:18:40.743393 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.743758 containerd[1535]: time="2025-09-16T04:18:40.743729889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:40.758796 containerd[1535]: time="2025-09-16T04:18:40.758756313Z" level=info msg="connecting to shim 9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec" address="unix:///run/containerd/s/189e848f3a941f2bfe8f0023bcb954ebeba2e0b7925f9f2b208e0982c6884f50" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:40.773610 containerd[1535]: time="2025-09-16T04:18:40.773513428Z" level=info msg="connecting to shim aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae" address="unix:///run/containerd/s/32ead1199949b73b6ccebcc070c91bde41bd6a606cc3131562866ba2dfb98637" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:40.782304 containerd[1535]: time="2025-09-16T04:18:40.780998409Z" level=info msg="connecting to shim e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4" address="unix:///run/containerd/s/db19fcd137e01246a21cd4bd84167a339fc01103531878dc73fa97af5298bcae" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:40.791327 systemd[1]: Started cri-containerd-9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec.scope - libcontainer container 
9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec. Sep 16 04:18:40.806342 systemd[1]: Started cri-containerd-aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae.scope - libcontainer container aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae. Sep 16 04:18:40.809369 systemd[1]: Started cri-containerd-e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4.scope - libcontainer container e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4. Sep 16 04:18:40.840022 containerd[1535]: time="2025-09-16T04:18:40.839980932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88ab6e0bed2e5ff24f20da5ce13a6290,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec\"" Sep 16 04:18:40.841878 kubelet[2357]: E0916 04:18:40.841854 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.847214 containerd[1535]: time="2025-09-16T04:18:40.847181437Z" level=info msg="CreateContainer within sandbox \"9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:18:40.855817 containerd[1535]: time="2025-09-16T04:18:40.855762580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4\"" Sep 16 04:18:40.856536 containerd[1535]: time="2025-09-16T04:18:40.856513523Z" level=info msg="Container e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:18:40.856759 kubelet[2357]: E0916 04:18:40.856735 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.858242 containerd[1535]: time="2025-09-16T04:18:40.858216771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae\"" Sep 16 04:18:40.859005 kubelet[2357]: E0916 04:18:40.858984 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:40.860329 containerd[1535]: time="2025-09-16T04:18:40.860290128Z" level=info msg="CreateContainer within sandbox \"e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:18:40.863432 containerd[1535]: time="2025-09-16T04:18:40.863394580Z" level=info msg="CreateContainer within sandbox \"aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:18:40.866053 containerd[1535]: time="2025-09-16T04:18:40.866008115Z" level=info msg="CreateContainer within sandbox \"9ed262a89a85d031e5daa3a47d3fc611a1198121db934d948758d07c048fb1ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad\"" Sep 16 
04:18:40.866632 containerd[1535]: time="2025-09-16T04:18:40.866601274Z" level=info msg="StartContainer for \"e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad\"" Sep 16 04:18:40.867909 containerd[1535]: time="2025-09-16T04:18:40.867879590Z" level=info msg="connecting to shim e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad" address="unix:///run/containerd/s/189e848f3a941f2bfe8f0023bcb954ebeba2e0b7925f9f2b208e0982c6884f50" protocol=ttrpc version=3 Sep 16 04:18:40.872445 containerd[1535]: time="2025-09-16T04:18:40.872384248Z" level=info msg="Container bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:18:40.875183 containerd[1535]: time="2025-09-16T04:18:40.875150004Z" level=info msg="Container 6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:18:40.880000 containerd[1535]: time="2025-09-16T04:18:40.879960986Z" level=info msg="CreateContainer within sandbox \"e99177bcaa4c7426c1b566f0dec8dcfe960a8bc363f42657e7c9b71a148c07e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a\"" Sep 16 04:18:40.880444 containerd[1535]: time="2025-09-16T04:18:40.880414969Z" level=info msg="StartContainer for \"bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a\"" Sep 16 04:18:40.881723 containerd[1535]: time="2025-09-16T04:18:40.881682200Z" level=info msg="connecting to shim bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a" address="unix:///run/containerd/s/db19fcd137e01246a21cd4bd84167a339fc01103531878dc73fa97af5298bcae" protocol=ttrpc version=3 Sep 16 04:18:40.887489 containerd[1535]: time="2025-09-16T04:18:40.887451529Z" level=info msg="CreateContainer within sandbox \"aa689e5fa0b383c1abfc85b658d45a5b3bc8c4d9ce9d21f1bef2327dd26cebae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23\"" Sep 16 04:18:40.887872 containerd[1535]: time="2025-09-16T04:18:40.887845048Z" level=info msg="StartContainer for \"6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23\"" Sep 16 04:18:40.888897 containerd[1535]: time="2025-09-16T04:18:40.888849933Z" level=info msg="connecting to shim 6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23" address="unix:///run/containerd/s/32ead1199949b73b6ccebcc070c91bde41bd6a606cc3131562866ba2dfb98637" protocol=ttrpc version=3 Sep 16 04:18:40.890394 systemd[1]: Started cri-containerd-e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad.scope - libcontainer container e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad. Sep 16 04:18:40.906304 systemd[1]: Started cri-containerd-bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a.scope - libcontainer container bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a. Sep 16 04:18:40.910768 systemd[1]: Started cri-containerd-6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23.scope - libcontainer container 6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23. 
Sep 16 04:18:40.916336 kubelet[2357]: I0916 04:18:40.916298 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:18:40.918193 kubelet[2357]: E0916 04:18:40.918133 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Sep 16 04:18:40.939387 kubelet[2357]: E0916 04:18:40.939080 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 04:18:40.957585 containerd[1535]: time="2025-09-16T04:18:40.957288152Z" level=info msg="StartContainer for \"e094b3f09709d02f236c24232841d6bbbea2a11bfdf138f56b9681de00d3f2ad\" returns successfully" Sep 16 04:18:40.957585 containerd[1535]: time="2025-09-16T04:18:40.957428729Z" level=info msg="StartContainer for \"bd2e75bf14fb736836776d7f87fac4d9568a3e29e960c52d1634cab86dd4983a\" returns successfully" Sep 16 04:18:40.959471 containerd[1535]: time="2025-09-16T04:18:40.959409648Z" level=info msg="StartContainer for \"6ed8fa6d5eaedb3bbce271167dc22c2ee7f668f8f457d34d6b17a39181bb6f23\" returns successfully" Sep 16 04:18:41.108442 kubelet[2357]: E0916 04:18:41.108412 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:41.108571 kubelet[2357]: E0916 04:18:41.108552 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:41.114223 kubelet[2357]: E0916 04:18:41.114163 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:41.114336 kubelet[2357]: E0916 04:18:41.114265 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:41.114336 kubelet[2357]: E0916 04:18:41.114291 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:41.114436 kubelet[2357]: E0916 04:18:41.114423 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:41.719578 kubelet[2357]: I0916 04:18:41.719549 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:18:42.117623 kubelet[2357]: E0916 04:18:42.117283 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:18:42.117623 kubelet[2357]: E0916 04:18:42.117410 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:42.118689 kubelet[2357]: E0916 04:18:42.118658 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Sep 16 04:18:42.118847 kubelet[2357]: E0916 04:18:42.118795 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:42.374305 kubelet[2357]: E0916 04:18:42.374215 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 16 04:18:42.545614 kubelet[2357]: I0916 04:18:42.545496 2357 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 04:18:42.581113 kubelet[2357]: I0916 04:18:42.580662 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:42.596667 kubelet[2357]: E0916 04:18:42.596629 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:42.596667 kubelet[2357]: I0916 04:18:42.596662 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:42.603340 kubelet[2357]: E0916 04:18:42.603304 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:42.603340 kubelet[2357]: I0916 04:18:42.603328 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:42.607897 kubelet[2357]: E0916 04:18:42.607863 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:43.069565 kubelet[2357]: I0916 04:18:43.069526 2357 apiserver.go:52] "Watching apiserver" Sep 16 04:18:43.081462 kubelet[2357]: I0916 04:18:43.081359 2357 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:18:44.232831 kubelet[2357]: I0916 04:18:44.232793 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:44.239076 kubelet[2357]: E0916 04:18:44.238996 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:44.586827 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-9.scope)... Sep 16 04:18:44.586842 systemd[1]: Reloading... Sep 16 04:18:44.658165 zram_generator::config[2690]: No configuration found. Sep 16 04:18:44.819796 systemd[1]: Reloading finished in 232 ms. Sep 16 04:18:44.839259 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:44.855114 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:18:44.855369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:18:44.855428 systemd[1]: kubelet.service: Consumed 1.279s CPU time, 128.2M memory peak. Sep 16 04:18:44.857073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:18:44.987819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:18:45.006532 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:18:45.043786 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:18:45.045192 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:18:45.045192 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:18:45.045192 kubelet[2729]: I0916 04:18:45.044213 2729 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:18:45.049981 kubelet[2729]: I0916 04:18:45.049946 2729 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 04:18:45.049981 kubelet[2729]: I0916 04:18:45.049972 2729 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:18:45.050186 kubelet[2729]: I0916 04:18:45.050172 2729 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 04:18:45.051441 kubelet[2729]: I0916 04:18:45.051420 2729 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 16 04:18:45.053653 kubelet[2729]: I0916 04:18:45.053625 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:18:45.057012 kubelet[2729]: I0916 04:18:45.056986 2729 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:18:45.060598 kubelet[2729]: I0916 04:18:45.060566 2729 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:18:45.060801 kubelet[2729]: I0916 04:18:45.060768 2729 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:18:45.060943 kubelet[2729]: I0916 04:18:45.060794 2729 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:18:45.061021 kubelet[2729]: I0916 04:18:45.060948 2729 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:18:45.061021 kubelet[2729]: I0916 04:18:45.060956 2729 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 04:18:45.061021 kubelet[2729]: I0916 04:18:45.060994 2729 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:18:45.061248 kubelet[2729]: I0916 04:18:45.061223 2729 kubelet.go:480] "Attempting to sync node with API server" Sep 16 04:18:45.061248 kubelet[2729]: I0916 04:18:45.061246 2729 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:18:45.061289 kubelet[2729]: I0916 04:18:45.061272 2729 kubelet.go:386] "Adding apiserver pod source" Sep 16 04:18:45.061289 kubelet[2729]: I0916 04:18:45.061286 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:18:45.061964 kubelet[2729]: I0916 04:18:45.061934 2729 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:18:45.062496 kubelet[2729]: I0916 04:18:45.062479 2729 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 04:18:45.064469 kubelet[2729]: I0916 04:18:45.064448 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:18:45.064534 kubelet[2729]: I0916 04:18:45.064501 2729 server.go:1289] "Started kubelet" Sep 16 04:18:45.064593 kubelet[2729]: I0916 04:18:45.064573 2729 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:18:45.065385 kubelet[2729]: I0916 
04:18:45.065350 2729 server.go:317] "Adding debug handlers to kubelet server" Sep 16 04:18:45.066957 kubelet[2729]: I0916 04:18:45.066933 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:18:45.067079 kubelet[2729]: I0916 04:18:45.067022 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:18:45.067246 kubelet[2729]: I0916 04:18:45.067225 2729 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:18:45.070144 kubelet[2729]: I0916 04:18:45.069421 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:18:45.075593 kubelet[2729]: E0916 04:18:45.075546 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:18:45.075593 kubelet[2729]: I0916 04:18:45.075592 2729 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:18:45.076492 kubelet[2729]: I0916 04:18:45.075831 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:18:45.076492 kubelet[2729]: I0916 04:18:45.076387 2729 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:18:45.081979 kubelet[2729]: I0916 04:18:45.081950 2729 factory.go:223] Registration of the systemd container factory successfully Sep 16 04:18:45.082068 kubelet[2729]: I0916 04:18:45.082047 2729 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:18:45.083949 kubelet[2729]: E0916 04:18:45.083816 2729 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:18:45.084346 kubelet[2729]: I0916 04:18:45.084244 2729 factory.go:223] Registration of the containerd container factory successfully Sep 16 04:18:45.088867 kubelet[2729]: I0916 04:18:45.088723 2729 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 04:18:45.092000 kubelet[2729]: I0916 04:18:45.091928 2729 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 04:18:45.092088 kubelet[2729]: I0916 04:18:45.092077 2729 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 04:18:45.092159 kubelet[2729]: I0916 04:18:45.092147 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:18:45.092256 kubelet[2729]: I0916 04:18:45.092246 2729 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 04:18:45.092348 kubelet[2729]: E0916 04:18:45.092331 2729 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:18:45.117792 kubelet[2729]: I0916 04:18:45.117746 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:18:45.117792 kubelet[2729]: I0916 04:18:45.117785 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:18:45.117936 kubelet[2729]: I0916 04:18:45.117805 2729 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:18:45.117936 kubelet[2729]: I0916 04:18:45.117923 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:18:45.117974 kubelet[2729]: I0916 04:18:45.117933 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:18:45.117974 kubelet[2729]: I0916 04:18:45.117949 2729 policy_none.go:49] "None policy: Start" Sep 16 04:18:45.117974 kubelet[2729]: I0916 04:18:45.117957 2729 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:18:45.117974 kubelet[2729]: I0916 04:18:45.117966 2729 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:18:45.118055 kubelet[2729]: I0916 04:18:45.118043 2729 state_mem.go:75] "Updated machine memory state" Sep 16 04:18:45.121297 kubelet[2729]: E0916 04:18:45.121272 2729 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 04:18:45.121440 kubelet[2729]: I0916 04:18:45.121425 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:18:45.121474 kubelet[2729]: I0916 04:18:45.121444 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:18:45.121911 kubelet[2729]: I0916 04:18:45.121842 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:18:45.123308 kubelet[2729]: E0916 04:18:45.123253 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:18:45.193572 kubelet[2729]: I0916 04:18:45.193422 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:45.193572 kubelet[2729]: I0916 04:18:45.193470 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:45.193572 kubelet[2729]: I0916 04:18:45.193489 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.198996 kubelet[2729]: E0916 04:18:45.198960 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:45.225287 kubelet[2729]: I0916 04:18:45.225259 2729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:18:45.231255 kubelet[2729]: I0916 04:18:45.231230 2729 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 16 04:18:45.231327 kubelet[2729]: I0916 04:18:45.231303 2729 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 04:18:45.377635 kubelet[2729]: I0916 04:18:45.377523 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:45.377635 kubelet[2729]: I0916 04:18:45.377569 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.377635 kubelet[2729]: I0916 04:18:45.377611 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.377635 kubelet[2729]: I0916 04:18:45.377630 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.377807 kubelet[2729]: I0916 04:18:45.377662 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:45.377807 kubelet[2729]: I0916 04:18:45.377681 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:45.377807 
kubelet[2729]: I0916 04:18:45.377696 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88ab6e0bed2e5ff24f20da5ce13a6290-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88ab6e0bed2e5ff24f20da5ce13a6290\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:45.377807 kubelet[2729]: I0916 04:18:45.377733 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.377807 kubelet[2729]: I0916 04:18:45.377749 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:45.500074 kubelet[2729]: E0916 04:18:45.499844 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:45.500074 kubelet[2729]: E0916 04:18:45.499890 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:45.500074 kubelet[2729]: E0916 04:18:45.500017 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:45.584389 sudo[2773]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:18:45.584653 sudo[2773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:18:45.904382 sudo[2773]: pam_unix(sudo:session): session closed for user root Sep 16 04:18:46.062055 kubelet[2729]: I0916 04:18:46.061943 2729 apiserver.go:52] "Watching apiserver" Sep 16 04:18:46.076872 kubelet[2729]: I0916 04:18:46.076808 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:18:46.104760 kubelet[2729]: I0916 04:18:46.104516 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:46.105401 kubelet[2729]: I0916 04:18:46.104696 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:46.105902 kubelet[2729]: I0916 04:18:46.104826 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:46.116226 kubelet[2729]: E0916 04:18:46.116200 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 04:18:46.116425 kubelet[2729]: E0916 04:18:46.116363 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:18:46.116589 kubelet[2729]: E0916 04:18:46.116567 2729 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:46.116715 kubelet[2729]: E0916 04:18:46.116698 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:18:46.116866 kubelet[2729]: E0916 04:18:46.116809 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:46.117001 kubelet[2729]: E0916 04:18:46.116988 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:46.133409 kubelet[2729]: I0916 04:18:46.133292 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.1332595420000002 podStartE2EDuration="2.133259542s" podCreationTimestamp="2025-09-16 04:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:18:46.13273823 +0000 UTC m=+1.122459088" watchObservedRunningTime="2025-09-16 04:18:46.133259542 +0000 UTC m=+1.122980400" Sep 16 04:18:46.157241 kubelet[2729]: I0916 04:18:46.154876 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.154859469 podStartE2EDuration="1.154859469s" podCreationTimestamp="2025-09-16 04:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:18:46.14122314 +0000 UTC m=+1.130944078" watchObservedRunningTime="2025-09-16 04:18:46.154859469 +0000 UTC m=+1.144580367" Sep 16 04:18:46.169193 kubelet[2729]: I0916 04:18:46.168583 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.168566339 podStartE2EDuration="1.168566339s" podCreationTimestamp="2025-09-16 04:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:18:46.15482666 +0000 UTC m=+1.144547558" watchObservedRunningTime="2025-09-16 04:18:46.168566339 +0000 UTC m=+1.158287197" Sep 16 04:18:47.106768 kubelet[2729]: E0916 04:18:47.106732 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:47.107451 kubelet[2729]: E0916 04:18:47.107394 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:47.107451 kubelet[2729]: E0916 04:18:47.107419 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:47.491127 sudo[1779]: pam_unix(sudo:session): session closed for user root Sep 16 04:18:47.492232 sshd[1778]: Connection closed by 10.0.0.1 port 56816 Sep 16 04:18:47.492564 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Sep 16 04:18:47.495878 systemd[1]: 
sshd@8-10.0.0.23:22-10.0.0.1:56816.service: Deactivated successfully. Sep 16 04:18:47.497997 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:18:47.498246 systemd[1]: session-9.scope: Consumed 7.101s CPU time, 254.2M memory peak. Sep 16 04:18:47.499304 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:18:47.500681 systemd-logind[1519]: Removed session 9. Sep 16 04:18:48.107489 kubelet[2729]: E0916 04:18:48.107444 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:48.107489 kubelet[2729]: E0916 04:18:48.107478 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:50.057208 kubelet[2729]: I0916 04:18:50.057113 2729 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:18:50.057834 kubelet[2729]: I0916 04:18:50.057615 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:18:50.057880 containerd[1535]: time="2025-09-16T04:18:50.057470716Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:18:50.544313 update_engine[1521]: I20250916 04:18:50.544235 1521 update_attempter.cc:509] Updating boot flags... Sep 16 04:18:51.014196 kubelet[2729]: I0916 04:18:51.013591 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/393898ef-3b58-4ad2-a308-92c6a8be0ff3-kube-proxy\") pod \"kube-proxy-g7868\" (UID: \"393898ef-3b58-4ad2-a308-92c6a8be0ff3\") " pod="kube-system/kube-proxy-g7868" Sep 16 04:18:51.014196 kubelet[2729]: I0916 04:18:51.013683 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/393898ef-3b58-4ad2-a308-92c6a8be0ff3-xtables-lock\") pod \"kube-proxy-g7868\" (UID: \"393898ef-3b58-4ad2-a308-92c6a8be0ff3\") " pod="kube-system/kube-proxy-g7868" Sep 16 04:18:51.014196 kubelet[2729]: I0916 04:18:51.013710 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/393898ef-3b58-4ad2-a308-92c6a8be0ff3-lib-modules\") pod \"kube-proxy-g7868\" (UID: \"393898ef-3b58-4ad2-a308-92c6a8be0ff3\") " pod="kube-system/kube-proxy-g7868" Sep 16 04:18:51.014196 kubelet[2729]: I0916 04:18:51.013730 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwpk4\" (UniqueName: \"kubernetes.io/projected/393898ef-3b58-4ad2-a308-92c6a8be0ff3-kube-api-access-zwpk4\") pod \"kube-proxy-g7868\" (UID: \"393898ef-3b58-4ad2-a308-92c6a8be0ff3\") " pod="kube-system/kube-proxy-g7868" Sep 16 04:18:51.016611 systemd[1]: Created slice kubepods-besteffort-pod393898ef_3b58_4ad2_a308_92c6a8be0ff3.slice - libcontainer container kubepods-besteffort-pod393898ef_3b58_4ad2_a308_92c6a8be0ff3.slice. Sep 16 04:18:51.036423 systemd[1]: Created slice kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice - libcontainer container kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice. 
Sep 16 04:18:51.114786 kubelet[2729]: I0916 04:18:51.114726 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-config-path\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.114786 kubelet[2729]: I0916 04:18:51.114781 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-hubble-tls\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114799 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-run\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114813 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-bpf-maps\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114840 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-cgroup\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114855 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-lib-modules\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114869 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-xtables-lock\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115198 kubelet[2729]: I0916 04:18:51.114883 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-kernel\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.114936 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cni-path\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.114954 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-etc-cni-netd\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.114970 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8361ad84-87e9-4783-b197-bfc57da9a1a8-clustermesh-secrets\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.115003 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-net\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.115018 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582zs\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-kube-api-access-582zs\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.115350 kubelet[2729]: I0916 04:18:51.115048 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-hostproc\") pod \"cilium-4ksbr\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " pod="kube-system/cilium-4ksbr" Sep 16 04:18:51.250740 systemd[1]: Created slice kubepods-besteffort-pod64a4907b_0046_4414_8bba_8cd535a72115.slice - libcontainer container kubepods-besteffort-pod64a4907b_0046_4414_8bba_8cd535a72115.slice. 
Sep 16 04:18:51.316105 kubelet[2729]: I0916 04:18:51.315983 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzvf\" (UniqueName: \"kubernetes.io/projected/64a4907b-0046-4414-8bba-8cd535a72115-kube-api-access-xrzvf\") pod \"cilium-operator-6c4d7847fc-4fdxs\" (UID: \"64a4907b-0046-4414-8bba-8cd535a72115\") " pod="kube-system/cilium-operator-6c4d7847fc-4fdxs" Sep 16 04:18:51.316105 kubelet[2729]: I0916 04:18:51.316030 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a4907b-0046-4414-8bba-8cd535a72115-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4fdxs\" (UID: \"64a4907b-0046-4414-8bba-8cd535a72115\") " pod="kube-system/cilium-operator-6c4d7847fc-4fdxs" Sep 16 04:18:51.334597 kubelet[2729]: E0916 04:18:51.334511 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:51.335279 containerd[1535]: time="2025-09-16T04:18:51.335231307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7868,Uid:393898ef-3b58-4ad2-a308-92c6a8be0ff3,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:51.339809 kubelet[2729]: E0916 04:18:51.339774 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:51.340588 containerd[1535]: time="2025-09-16T04:18:51.340554787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4ksbr,Uid:8361ad84-87e9-4783-b197-bfc57da9a1a8,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:51.360167 containerd[1535]: time="2025-09-16T04:18:51.359749715Z" level=info msg="connecting to shim ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b" address="unix:///run/containerd/s/1f1c5446543a8bb1b148ee2e2ca2cb3ec4bac8e90b6876b1d4a700fa508fdd3a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:51.366335 containerd[1535]: time="2025-09-16T04:18:51.366281988Z" level=info msg="connecting to shim 6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:51.387381 systemd[1]: Started cri-containerd-ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b.scope - libcontainer container ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b. Sep 16 04:18:51.390604 systemd[1]: Started cri-containerd-6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5.scope - libcontainer container 6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5. 
Sep 16 04:18:51.418896 containerd[1535]: time="2025-09-16T04:18:51.418855402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7868,Uid:393898ef-3b58-4ad2-a308-92c6a8be0ff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b\"" Sep 16 04:18:51.425014 kubelet[2729]: E0916 04:18:51.424978 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:51.429058 containerd[1535]: time="2025-09-16T04:18:51.428952199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4ksbr,Uid:8361ad84-87e9-4783-b197-bfc57da9a1a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\"" Sep 16 04:18:51.429895 kubelet[2729]: E0916 04:18:51.429870 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:51.432300 containerd[1535]: time="2025-09-16T04:18:51.432262865Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:18:51.434885 containerd[1535]: time="2025-09-16T04:18:51.434850849Z" level=info msg="CreateContainer within sandbox \"ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:18:51.443291 containerd[1535]: time="2025-09-16T04:18:51.443167404Z" level=info msg="Container b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:18:51.450690 containerd[1535]: time="2025-09-16T04:18:51.450654412Z" level=info msg="CreateContainer within sandbox \"ac93b436aa5050376cb96c8a6050c360456f7226bb3d4f7ddd2e283107f8d72b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495\"" Sep 16 04:18:51.454014 containerd[1535]: time="2025-09-16T04:18:51.453985163Z" level=info msg="StartContainer for \"b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495\"" Sep 16 04:18:51.455534 containerd[1535]: time="2025-09-16T04:18:51.455480301Z" level=info msg="connecting to shim b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495" address="unix:///run/containerd/s/1f1c5446543a8bb1b148ee2e2ca2cb3ec4bac8e90b6876b1d4a700fa508fdd3a" protocol=ttrpc version=3 Sep 16 04:18:51.477350 systemd[1]: Started cri-containerd-b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495.scope - libcontainer container b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495. 
Sep 16 04:18:51.513868 containerd[1535]: time="2025-09-16T04:18:51.513821816Z" level=info msg="StartContainer for \"b42872fca9ba6d30b9e9a88d4bcf13ea4bb0b5c121e3f16cfbd633c7905bb495\" returns successfully" Sep 16 04:18:51.556377 kubelet[2729]: E0916 04:18:51.555995 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:51.558048 containerd[1535]: time="2025-09-16T04:18:51.558010459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4fdxs,Uid:64a4907b-0046-4414-8bba-8cd535a72115,Namespace:kube-system,Attempt:0,}" Sep 16 04:18:52.060414 containerd[1535]: time="2025-09-16T04:18:52.060255590Z" level=info msg="connecting to shim 58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b" address="unix:///run/containerd/s/cba8a4e25051fca8e02e84038d4456d04399a76335c22140ad8fd2506796934f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:18:52.086341 systemd[1]: Started cri-containerd-58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b.scope - libcontainer container 58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b. Sep 16 04:18:52.117337 kubelet[2729]: E0916 04:18:52.117304 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:52.120633 containerd[1535]: time="2025-09-16T04:18:52.120591465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4fdxs,Uid:64a4907b-0046-4414-8bba-8cd535a72115,Namespace:kube-system,Attempt:0,} returns sandbox id \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\"" Sep 16 04:18:52.121717 kubelet[2729]: E0916 04:18:52.121662 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:52.707185 kubelet[2729]: E0916 04:18:52.707103 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:52.724678 kubelet[2729]: I0916 04:18:52.724617 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7868" podStartSLOduration=2.724602676 podStartE2EDuration="2.724602676s" podCreationTimestamp="2025-09-16 04:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:18:52.130589812 +0000 UTC m=+7.120310710" watchObservedRunningTime="2025-09-16 04:18:52.724602676 +0000 UTC m=+7.714323574" Sep 16 04:18:53.120536 kubelet[2729]: E0916 04:18:53.120412 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:53.992425 kubelet[2729]: E0916 04:18:53.992376 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:54.121568 kubelet[2729]: E0916 04:18:54.121530 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 
04:18:55.125781 kubelet[2729]: E0916 04:18:55.125210 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:56.980915 kubelet[2729]: E0916 04:18:56.980883 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:18:57.126540 kubelet[2729]: E0916 04:18:57.126491 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:01.476749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373497394.mount: Deactivated successfully. Sep 16 04:19:02.905426 containerd[1535]: time="2025-09-16T04:19:02.905374393Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:19:02.906365 containerd[1535]: time="2025-09-16T04:19:02.906115655Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 16 04:19:02.907035 containerd[1535]: time="2025-09-16T04:19:02.906997977Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:19:02.919540 containerd[1535]: time="2025-09-16T04:19:02.919502861Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.487196147s" Sep 16 04:19:02.919736 containerd[1535]: time="2025-09-16T04:19:02.919637040Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 16 04:19:02.920557 containerd[1535]: time="2025-09-16T04:19:02.920531283Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:19:02.925422 containerd[1535]: time="2025-09-16T04:19:02.925380872Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:19:02.937386 containerd[1535]: time="2025-09-16T04:19:02.937325959Z" level=info msg="Container 0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:02.953537 containerd[1535]: time="2025-09-16T04:19:02.953484308Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\"" Sep 16 04:19:02.954041 containerd[1535]: time="2025-09-16T04:19:02.954015861Z" level=info msg="StartContainer for 
\"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\"" Sep 16 04:19:02.954873 containerd[1535]: time="2025-09-16T04:19:02.954847736Z" level=info msg="connecting to shim 0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" protocol=ttrpc version=3 Sep 16 04:19:02.998298 systemd[1]: Started cri-containerd-0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09.scope - libcontainer container 0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09. Sep 16 04:19:03.024258 containerd[1535]: time="2025-09-16T04:19:03.024217260Z" level=info msg="StartContainer for \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" returns successfully" Sep 16 04:19:03.040218 systemd[1]: cri-containerd-0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09.scope: Deactivated successfully. Sep 16 04:19:03.040901 systemd[1]: cri-containerd-0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09.scope: Consumed 23ms CPU time, 5.6M memory peak, 3.1M written to disk. Sep 16 04:19:03.106347 containerd[1535]: time="2025-09-16T04:19:03.106299344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" id:\"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" pid:3178 exited_at:{seconds:1757996343 nanos:102318617}" Sep 16 04:19:03.106566 containerd[1535]: time="2025-09-16T04:19:03.106336669Z" level=info msg="received exit event container_id:\"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" id:\"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" pid:3178 exited_at:{seconds:1757996343 nanos:102318617}" Sep 16 04:19:03.163497 kubelet[2729]: E0916 04:19:03.162948 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:03.936131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09-rootfs.mount: Deactivated successfully. Sep 16 04:19:04.165760 kubelet[2729]: E0916 04:19:04.165701 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:04.171982 containerd[1535]: time="2025-09-16T04:19:04.171922287Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:19:04.209491 containerd[1535]: time="2025-09-16T04:19:04.208925770Z" level=info msg="Container 51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:04.212597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577407359.mount: Deactivated successfully. 
Sep 16 04:19:04.217411 containerd[1535]: time="2025-09-16T04:19:04.217295839Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\"" Sep 16 04:19:04.218802 containerd[1535]: time="2025-09-16T04:19:04.218776948Z" level=info msg="StartContainer for \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\"" Sep 16 04:19:04.220997 containerd[1535]: time="2025-09-16T04:19:04.220935703Z" level=info msg="connecting to shim 51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" protocol=ttrpc version=3 Sep 16 04:19:04.243301 systemd[1]: Started cri-containerd-51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37.scope - libcontainer container 51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37. Sep 16 04:19:04.308772 containerd[1535]: time="2025-09-16T04:19:04.308730150Z" level=info msg="StartContainer for \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" returns successfully" Sep 16 04:19:04.322310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:19:04.322682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:19:04.322933 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:19:04.324252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:19:04.325966 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:19:04.326344 systemd[1]: cri-containerd-51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37.scope: Deactivated successfully. Sep 16 04:19:04.330475 containerd[1535]: time="2025-09-16T04:19:04.330440001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" id:\"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" pid:3221 exited_at:{seconds:1757996344 nanos:327322403}" Sep 16 04:19:04.330782 containerd[1535]: time="2025-09-16T04:19:04.330731198Z" level=info msg="received exit event container_id:\"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" id:\"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" pid:3221 exited_at:{seconds:1757996344 nanos:327322403}" Sep 16 04:19:04.370275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:19:04.936362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37-rootfs.mount: Deactivated successfully. 
Sep 16 04:19:05.171071 kubelet[2729]: E0916 04:19:05.170922 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:05.181432 containerd[1535]: time="2025-09-16T04:19:05.181381466Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:19:05.201251 containerd[1535]: time="2025-09-16T04:19:05.200434970Z" level=info msg="Container 04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:05.200908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977635396.mount: Deactivated successfully. Sep 16 04:19:05.209157 containerd[1535]: time="2025-09-16T04:19:05.209091274Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\"" Sep 16 04:19:05.209952 containerd[1535]: time="2025-09-16T04:19:05.209679027Z" level=info msg="StartContainer for \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\"" Sep 16 04:19:05.212313 containerd[1535]: time="2025-09-16T04:19:05.212289388Z" level=info msg="connecting to shim 04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" protocol=ttrpc version=3 Sep 16 04:19:05.239289 systemd[1]: Started cri-containerd-04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18.scope - libcontainer container 04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18. Sep 16 04:19:05.276217 systemd[1]: cri-containerd-04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18.scope: Deactivated successfully. Sep 16 04:19:05.282286 containerd[1535]: time="2025-09-16T04:19:05.282182104Z" level=info msg="StartContainer for \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" returns successfully" Sep 16 04:19:05.282636 containerd[1535]: time="2025-09-16T04:19:05.282589394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" id:\"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" pid:3278 exited_at:{seconds:1757996345 nanos:281616034}" Sep 16 04:19:05.289290 containerd[1535]: time="2025-09-16T04:19:05.289240172Z" level=info msg="received exit event container_id:\"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" id:\"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" pid:3278 exited_at:{seconds:1757996345 nanos:281616034}" Sep 16 04:19:05.306985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18-rootfs.mount: Deactivated successfully. 
Sep 16 04:19:06.107291 containerd[1535]: time="2025-09-16T04:19:06.107234074Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:19:06.108191 containerd[1535]: time="2025-09-16T04:19:06.108155904Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 16 04:19:06.110157 containerd[1535]: time="2025-09-16T04:19:06.110114656Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:19:06.111665 containerd[1535]: time="2025-09-16T04:19:06.111554947Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.190870883s" Sep 16 04:19:06.111665 containerd[1535]: time="2025-09-16T04:19:06.111586191Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 16 04:19:06.117964 containerd[1535]: time="2025-09-16T04:19:06.117933984Z" level=info msg="CreateContainer within sandbox \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:19:06.125006 containerd[1535]: time="2025-09-16T04:19:06.124405671Z" level=info msg="Container 71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:06.130700 containerd[1535]: time="2025-09-16T04:19:06.130597606Z" level=info msg="CreateContainer within sandbox \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\"" Sep 16 04:19:06.131611 containerd[1535]: time="2025-09-16T04:19:06.131587283Z" level=info msg="StartContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\"" Sep 16 04:19:06.132610 containerd[1535]: time="2025-09-16T04:19:06.132582361Z" level=info msg="connecting to shim 71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858" address="unix:///run/containerd/s/cba8a4e25051fca8e02e84038d4456d04399a76335c22140ad8fd2506796934f" protocol=ttrpc version=3 Sep 16 04:19:06.152640 systemd[1]: Started cri-containerd-71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858.scope - libcontainer container 71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858. 
Sep 16 04:19:06.180108 containerd[1535]: time="2025-09-16T04:19:06.180078516Z" level=info msg="StartContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" returns successfully" Sep 16 04:19:06.185701 kubelet[2729]: E0916 04:19:06.185669 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:06.191529 containerd[1535]: time="2025-09-16T04:19:06.191479428Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:19:06.204411 containerd[1535]: time="2025-09-16T04:19:06.204375838Z" level=info msg="Container 515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:06.224015 containerd[1535]: time="2025-09-16T04:19:06.223971683Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\"" Sep 16 04:19:06.224872 containerd[1535]: time="2025-09-16T04:19:06.224841666Z" level=info msg="StartContainer for \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\"" Sep 16 04:19:06.227219 containerd[1535]: time="2025-09-16T04:19:06.227187824Z" level=info msg="connecting to shim 515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" protocol=ttrpc version=3 Sep 16 04:19:06.255325 systemd[1]: Started cri-containerd-515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe.scope - libcontainer container 515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe. Sep 16 04:19:06.287065 systemd[1]: cri-containerd-515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe.scope: Deactivated successfully. 
Sep 16 04:19:06.293457 containerd[1535]: time="2025-09-16T04:19:06.293414680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" id:\"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" pid:3360 exited_at:{seconds:1757996346 nanos:291557900}" Sep 16 04:19:06.295803 containerd[1535]: time="2025-09-16T04:19:06.295692711Z" level=info msg="received exit event container_id:\"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" id:\"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" pid:3360 exited_at:{seconds:1757996346 nanos:291557900}" Sep 16 04:19:06.300008 containerd[1535]: time="2025-09-16T04:19:06.299976819Z" level=info msg="StartContainer for \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" returns successfully" Sep 16 04:19:06.338481 containerd[1535]: time="2025-09-16T04:19:06.319014397Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice/cri-containerd-515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe.scope/memory.events\": no such file or directory" Sep 16 04:19:07.126650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe-rootfs.mount: Deactivated successfully. Sep 16 04:19:07.189483 kubelet[2729]: E0916 04:19:07.189454 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:07.192030 kubelet[2729]: E0916 04:19:07.191903 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:07.196275 containerd[1535]: time="2025-09-16T04:19:07.196230097Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:19:07.216381 containerd[1535]: time="2025-09-16T04:19:07.216346681Z" level=info msg="Container 0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:07.223113 containerd[1535]: time="2025-09-16T04:19:07.223063290Z" level=info msg="CreateContainer within sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\"" Sep 16 04:19:07.226706 containerd[1535]: time="2025-09-16T04:19:07.226421875Z" level=info msg="StartContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\"" Sep 16 04:19:07.227993 containerd[1535]: time="2025-09-16T04:19:07.227962331Z" level=info msg="connecting to shim 0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af" address="unix:///run/containerd/s/af6e241ba2aab814e80b679f0cc1a250f227bbcd744b712d894be0dbdcff64b6" protocol=ttrpc version=3 Sep 16 04:19:07.247280 systemd[1]: Started cri-containerd-0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af.scope - libcontainer container 0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af. 
Sep 16 04:19:07.275845 containerd[1535]: time="2025-09-16T04:19:07.275809891Z" level=info msg="StartContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" returns successfully" Sep 16 04:19:07.359571 containerd[1535]: time="2025-09-16T04:19:07.359532321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" id:\"99f9f603d2338108760e80d4d739bf3848b1144153109cbb65f073f2ba61e98f\" pid:3429 exited_at:{seconds:1757996347 nanos:359239887}" Sep 16 04:19:07.452399 kubelet[2729]: I0916 04:19:07.452328 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:19:07.481382 kubelet[2729]: I0916 04:19:07.481328 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4fdxs" podStartSLOduration=2.491250493 podStartE2EDuration="16.481310709s" podCreationTimestamp="2025-09-16 04:18:51 +0000 UTC" firstStartedPulling="2025-09-16 04:18:52.122238859 +0000 UTC m=+7.111959757" lastFinishedPulling="2025-09-16 04:19:06.112299075 +0000 UTC m=+21.102019973" observedRunningTime="2025-09-16 04:19:07.213209641 +0000 UTC m=+22.202930539" watchObservedRunningTime="2025-09-16 04:19:07.481310709 +0000 UTC m=+22.471031607" Sep 16 04:19:07.506203 systemd[1]: Created slice kubepods-burstable-podeb1c99db_b94a_43c6_afd2_4b5b8c65e4e3.slice - libcontainer container kubepods-burstable-podeb1c99db_b94a_43c6_afd2_4b5b8c65e4e3.slice. Sep 16 04:19:07.514180 systemd[1]: Created slice kubepods-burstable-pod42499d6a_10c4_4209_838c_3cd131928882.slice - libcontainer container kubepods-burstable-pod42499d6a_10c4_4209_838c_3cd131928882.slice. Sep 16 04:19:07.528853 kubelet[2729]: I0916 04:19:07.528818 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3-config-volume\") pod \"coredns-674b8bbfcf-c8w26\" (UID: \"eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3\") " pod="kube-system/coredns-674b8bbfcf-c8w26" Sep 16 04:19:07.529131 kubelet[2729]: I0916 04:19:07.529027 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzhtf\" (UniqueName: \"kubernetes.io/projected/42499d6a-10c4-4209-838c-3cd131928882-kube-api-access-pzhtf\") pod \"coredns-674b8bbfcf-h2xdf\" (UID: \"42499d6a-10c4-4209-838c-3cd131928882\") " pod="kube-system/coredns-674b8bbfcf-h2xdf" Sep 16 04:19:07.529131 kubelet[2729]: I0916 04:19:07.529067 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42499d6a-10c4-4209-838c-3cd131928882-config-volume\") pod \"coredns-674b8bbfcf-h2xdf\" (UID: \"42499d6a-10c4-4209-838c-3cd131928882\") " pod="kube-system/coredns-674b8bbfcf-h2xdf" Sep 16 04:19:07.529131 kubelet[2729]: I0916 04:19:07.529082 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4gw\" (UniqueName: \"kubernetes.io/projected/eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3-kube-api-access-kd4gw\") pod \"coredns-674b8bbfcf-c8w26\" (UID: \"eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3\") " pod="kube-system/coredns-674b8bbfcf-c8w26" Sep 16 04:19:07.810924 kubelet[2729]: E0916 04:19:07.810810 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:07.811583 containerd[1535]: time="2025-09-16T04:19:07.811526050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8w26,Uid:eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3,Namespace:kube-system,Attempt:0,}" Sep 16 04:19:07.818398 kubelet[2729]: E0916 04:19:07.817478 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:07.818751 containerd[1535]: time="2025-09-16T04:19:07.818706033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2xdf,Uid:42499d6a-10c4-4209-838c-3cd131928882,Namespace:kube-system,Attempt:0,}" Sep 16 04:19:08.197625 kubelet[2729]: E0916 04:19:08.197592 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:08.201213 kubelet[2729]: E0916 04:19:08.201179 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:08.224296 kubelet[2729]: I0916 04:19:08.224223 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4ksbr" podStartSLOduration=6.735664848 podStartE2EDuration="18.224205299s" podCreationTimestamp="2025-09-16 04:18:50 +0000 UTC" firstStartedPulling="2025-09-16 04:18:51.431862575 +0000 UTC m=+6.421583433" lastFinishedPulling="2025-09-16 04:19:02.920402986 +0000 UTC m=+17.910123884" observedRunningTime="2025-09-16 04:19:08.222470947 +0000 UTC m=+23.212191885" watchObservedRunningTime="2025-09-16 04:19:08.224205299 +0000 UTC m=+23.213926237" Sep 16 04:19:09.198879 kubelet[2729]: E0916 04:19:09.198823 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:10.106597 systemd-networkd[1445]: cilium_host: Link UP Sep 16 04:19:10.107510 systemd-networkd[1445]: cilium_net: Link UP Sep 16 04:19:10.107722 systemd-networkd[1445]: cilium_net: Gained carrier Sep 16 04:19:10.108225 systemd-networkd[1445]: cilium_host: Gained carrier Sep 16 04:19:10.189130 systemd-networkd[1445]: cilium_vxlan: Link UP Sep 16 04:19:10.189154 systemd-networkd[1445]: cilium_vxlan: Gained carrier Sep 16 04:19:10.196234 systemd-networkd[1445]: cilium_host: Gained IPv6LL Sep 16 04:19:10.203355 kubelet[2729]: E0916 04:19:10.203326 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:10.465164 kernel: NET: Registered PF_ALG protocol family Sep 16 04:19:10.532313 systemd-networkd[1445]: cilium_net: Gained IPv6LL Sep 16 04:19:11.096212 systemd-networkd[1445]: lxc_health: Link UP Sep 16 04:19:11.097574 systemd-networkd[1445]: lxc_health: Gained carrier Sep 16 04:19:11.351837 kubelet[2729]: E0916 04:19:11.351729 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:11.352457 kernel: eth0: renamed from tmpa2f09 Sep 16 04:19:11.355983 systemd-networkd[1445]: lxce4821a70b122: Link UP Sep 16 04:19:11.359437 systemd-networkd[1445]: lxce4821a70b122: Gained carrier Sep 16 04:19:11.366195 
kernel: eth0: renamed from tmp0316a Sep 16 04:19:11.368492 systemd-networkd[1445]: tmp0316a: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:19:11.368567 systemd-networkd[1445]: tmp0316a: Cannot enable IPv6, ignoring: No such file or directory Sep 16 04:19:11.368579 systemd-networkd[1445]: tmp0316a: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Sep 16 04:19:11.368590 systemd-networkd[1445]: tmp0316a: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Sep 16 04:19:11.368607 systemd-networkd[1445]: tmp0316a: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Sep 16 04:19:11.368628 systemd-networkd[1445]: tmp0316a: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Sep 16 04:19:11.370469 systemd-networkd[1445]: lxc0fa9dd4c2fe3: Link UP Sep 16 04:19:11.370810 systemd-networkd[1445]: lxc0fa9dd4c2fe3: Gained carrier Sep 16 04:19:11.956300 systemd-networkd[1445]: cilium_vxlan: Gained IPv6LL Sep 16 04:19:12.208396 kubelet[2729]: E0916 04:19:12.208183 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:12.532303 systemd-networkd[1445]: lxce4821a70b122: Gained IPv6LL Sep 16 04:19:12.660704 systemd-networkd[1445]: lxc_health: Gained IPv6LL Sep 16 04:19:12.788748 systemd-networkd[1445]: lxc0fa9dd4c2fe3: Gained IPv6LL Sep 16 04:19:13.211232 kubelet[2729]: E0916 04:19:13.211195 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:14.758450 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:35616.service - OpenSSH per-connection server daemon (10.0.0.1:35616). Sep 16 04:19:14.817001 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 35616 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:14.818511 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:14.823911 systemd-logind[1519]: New session 10 of user core. Sep 16 04:19:14.833313 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:19:14.988171 sshd[3912]: Connection closed by 10.0.0.1 port 35616 Sep 16 04:19:14.986780 sshd-session[3909]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:14.993926 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:35616.service: Deactivated successfully. Sep 16 04:19:14.995504 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:19:14.999554 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:19:15.000751 systemd-logind[1519]: Removed session 10. 
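The "Observed pod startup duration" entry for kube-system/cilium-4ksbr above carries enough timestamps to reconstruct both of its figures, assuming (as the arithmetic below suggests, and consistent with the kubelet startup-latency tracker excluding image pulls) that podStartE2EDuration runs from podCreationTimestamp to watchObservedRunningTime and that podStartSLOduration is that same span minus the time spent pulling images:

```python
# Worked check of the cilium-4ksbr startup-latency entry (values copied from the log;
# the E2E / "E2E minus pull time" split is an assumption about how the kubelet
# pod_startup_latency_tracker derives its figures, not its actual code).
from decimal import Decimal

def secs(hms: str) -> Decimal:
    """'HH:MM:SS[.fffffffff]' -> seconds since midnight, exact."""
    h, m, s = hms.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

pod_created          = secs("04:18:50")              # podCreationTimestamp
first_started_pull   = secs("04:18:51.431862575")    # firstStartedPulling
last_finished_pull   = secs("04:19:02.920402986")    # lastFinishedPulling
watch_observed_start = secs("04:19:08.224205299")    # watchObservedRunningTime

e2e  = watch_observed_start - pod_created            # 18.224205299 -> podStartE2EDuration
pull = last_finished_pull - first_started_pull       # 11.488540411 spent pulling images
slo  = e2e - pull                                    # 6.735664888 vs. logged 6.735664848
                                                     # (tiny gap: kubelet subtracts pull
                                                     # time using its monotonic m=+ clock)
print(f"E2E  = {e2e}s")
print(f"pull = {pull}s")
print(f"SLO  ≈ {slo}s")
```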
Sep 16 04:19:15.099247 containerd[1535]: time="2025-09-16T04:19:15.099037293Z" level=info msg="connecting to shim 0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7" address="unix:///run/containerd/s/5bfd0bea722edbf507a2024cc8df1200608cd4288762cf751ab29418c2689a92" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:19:15.100790 containerd[1535]: time="2025-09-16T04:19:15.099042734Z" level=info msg="connecting to shim a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b" address="unix:///run/containerd/s/202b8c7d2853f52257b43fa843e13d9fd328926e02bbdc3ff230935b78a63867" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:19:15.122765 systemd[1]: Started cri-containerd-a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b.scope - libcontainer container a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b. Sep 16 04:19:15.125769 systemd[1]: Started cri-containerd-0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7.scope - libcontainer container 0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7. Sep 16 04:19:15.142778 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:19:15.144071 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:19:15.165783 containerd[1535]: time="2025-09-16T04:19:15.165714760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2xdf,Uid:42499d6a-10c4-4209-838c-3cd131928882,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b\"" Sep 16 04:19:15.166534 kubelet[2729]: E0916 04:19:15.166509 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:15.170271 containerd[1535]: time="2025-09-16T04:19:15.170171520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8w26,Uid:eb1c99db-b94a-43c6-afd2-4b5b8c65e4e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7\"" Sep 16 04:19:15.170603 containerd[1535]: time="2025-09-16T04:19:15.170579237Z" level=info msg="CreateContainer within sandbox \"a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:19:15.171156 kubelet[2729]: E0916 04:19:15.171113 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:15.175885 containerd[1535]: time="2025-09-16T04:19:15.175850270Z" level=info msg="CreateContainer within sandbox \"0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:19:15.189181 containerd[1535]: time="2025-09-16T04:19:15.188537650Z" level=info msg="Container 4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:15.195410 containerd[1535]: time="2025-09-16T04:19:15.195371623Z" level=info msg="CreateContainer within sandbox \"0316a31b570db85f51192f28c8c9091b2665f7072b86845dad594039a6d4eed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb\"" Sep 16 04:19:15.196071 containerd[1535]: time="2025-09-16T04:19:15.196024922Z" level=info msg="StartContainer for \"4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb\"" Sep 16 04:19:15.196939 containerd[1535]: time="2025-09-16T04:19:15.196889359Z" level=info msg="connecting to shim 4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb" address="unix:///run/containerd/s/5bfd0bea722edbf507a2024cc8df1200608cd4288762cf751ab29418c2689a92" protocol=ttrpc version=3 Sep 16 04:19:15.203813 containerd[1535]: time="2025-09-16T04:19:15.202695641Z" level=info msg="Container 08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:19:15.208113 containerd[1535]: time="2025-09-16T04:19:15.208074124Z" level=info msg="CreateContainer within sandbox \"a2f0970bb0ae329518a2e4527f4e08e5a0c46b43917ca334501636c3e1e4826b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad\"" Sep 16 04:19:15.208908 containerd[1535]: time="2025-09-16T04:19:15.208875476Z" level=info msg="StartContainer for \"08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad\"" Sep 16 04:19:15.209705 containerd[1535]: time="2025-09-16T04:19:15.209671187Z" level=info msg="connecting to shim 08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad" address="unix:///run/containerd/s/202b8c7d2853f52257b43fa843e13d9fd328926e02bbdc3ff230935b78a63867" protocol=ttrpc version=3 Sep 16 04:19:15.225320 systemd[1]: Started cri-containerd-4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb.scope - libcontainer container 4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb. Sep 16 04:19:15.228421 systemd[1]: Started cri-containerd-08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad.scope - libcontainer container 08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad. Sep 16 04:19:15.260905 containerd[1535]: time="2025-09-16T04:19:15.260848342Z" level=info msg="StartContainer for \"4cb8b62565376460cf3f9788feef8860f3fabc36fad5b04514e7b482b8e3fafb\" returns successfully" Sep 16 04:19:15.261170 containerd[1535]: time="2025-09-16T04:19:15.261120527Z" level=info msg="StartContainer for \"08e784819781564792a81e44ad1b7b137b1e8ec43d9c7d7a2a153918250299ad\" returns successfully" Sep 16 04:19:16.076684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894083668.mount: Deactivated successfully. 
Sep 16 04:19:16.222668 kubelet[2729]: E0916 04:19:16.222430 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:16.226051 kubelet[2729]: E0916 04:19:16.225874 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:16.238166 kubelet[2729]: I0916 04:19:16.237632 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h2xdf" podStartSLOduration=25.237617425 podStartE2EDuration="25.237617425s" podCreationTimestamp="2025-09-16 04:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:19:16.237456731 +0000 UTC m=+31.227177669" watchObservedRunningTime="2025-09-16 04:19:16.237617425 +0000 UTC m=+31.227338323" Sep 16 04:19:16.268002 kubelet[2729]: I0916 04:19:16.267937 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c8w26" podStartSLOduration=25.267919876 podStartE2EDuration="25.267919876s" podCreationTimestamp="2025-09-16 04:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:19:16.252321432 +0000 UTC m=+31.242042330" watchObservedRunningTime="2025-09-16 04:19:16.267919876 +0000 UTC m=+31.257640774" Sep 16 04:19:17.227988 kubelet[2729]: E0916 04:19:17.227945 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:17.229184 kubelet[2729]: E0916 04:19:17.228096 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:18.229717 kubelet[2729]: E0916 04:19:18.229611 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:18.229717 kubelet[2729]: E0916 04:19:18.229625 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:19:20.004778 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:36568.service - OpenSSH per-connection server daemon (10.0.0.1:36568). Sep 16 04:19:20.064321 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:20.065736 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:20.070371 systemd-logind[1519]: New session 11 of user core. Sep 16 04:19:20.085552 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:19:20.225324 sshd[4107]: Connection closed by 10.0.0.1 port 36568 Sep 16 04:19:20.225687 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:20.229233 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:36568.service: Deactivated successfully. Sep 16 04:19:20.231380 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:19:20.233512 systemd-logind[1519]: Session 11 logged out. 
Waiting for processes to exit. Sep 16 04:19:20.235344 systemd-logind[1519]: Removed session 11. Sep 16 04:19:25.239700 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570). Sep 16 04:19:25.286537 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:25.287624 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:25.291242 systemd-logind[1519]: New session 12 of user core. Sep 16 04:19:25.302288 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:19:25.412847 sshd[4127]: Connection closed by 10.0.0.1 port 36570 Sep 16 04:19:25.413255 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:25.415994 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:36570.service: Deactivated successfully. Sep 16 04:19:25.417644 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:19:25.419610 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:19:25.420790 systemd-logind[1519]: Removed session 12. Sep 16 04:19:30.431298 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:50594.service - OpenSSH per-connection server daemon (10.0.0.1:50594). Sep 16 04:19:30.491510 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:30.492657 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:30.499854 systemd-logind[1519]: New session 13 of user core. Sep 16 04:19:30.507340 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:19:30.640668 sshd[4146]: Connection closed by 10.0.0.1 port 50594 Sep 16 04:19:30.641019 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:30.659515 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:50594.service: Deactivated successfully. Sep 16 04:19:30.662178 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:19:30.663353 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:19:30.668117 systemd-logind[1519]: Removed session 13. Sep 16 04:19:30.670018 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:50602.service - OpenSSH per-connection server daemon (10.0.0.1:50602). Sep 16 04:19:30.731068 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:30.732694 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:30.738473 systemd-logind[1519]: New session 14 of user core. Sep 16 04:19:30.751349 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:19:30.919216 sshd[4163]: Connection closed by 10.0.0.1 port 50602 Sep 16 04:19:30.920061 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:30.931373 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:50602.service: Deactivated successfully. Sep 16 04:19:30.934624 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:19:30.935283 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:19:30.938613 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:50612.service - OpenSSH per-connection server daemon (10.0.0.1:50612). Sep 16 04:19:30.941617 systemd-logind[1519]: Removed session 14. 
Sep 16 04:19:31.005955 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 50612 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:31.007313 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:31.011598 systemd-logind[1519]: New session 15 of user core. Sep 16 04:19:31.021290 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:19:31.136946 sshd[4177]: Connection closed by 10.0.0.1 port 50612 Sep 16 04:19:31.137283 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:31.140886 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:50612.service: Deactivated successfully. Sep 16 04:19:31.142688 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:19:31.143456 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:19:31.145207 systemd-logind[1519]: Removed session 15. Sep 16 04:19:36.158606 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). Sep 16 04:19:36.225974 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:36.228002 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:36.233478 systemd-logind[1519]: New session 16 of user core. Sep 16 04:19:36.241393 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:19:36.380632 sshd[4193]: Connection closed by 10.0.0.1 port 50638 Sep 16 04:19:36.381309 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:36.385086 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:19:36.385378 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:50638.service: Deactivated successfully. Sep 16 04:19:36.387810 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:19:36.390500 systemd-logind[1519]: Removed session 16. Sep 16 04:19:41.393043 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:49308.service - OpenSSH per-connection server daemon (10.0.0.1:49308). Sep 16 04:19:41.460825 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 49308 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:41.462211 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:41.467573 systemd-logind[1519]: New session 17 of user core. Sep 16 04:19:41.484410 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:19:41.599564 sshd[4210]: Connection closed by 10.0.0.1 port 49308 Sep 16 04:19:41.602960 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:41.613089 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:49308.service: Deactivated successfully. Sep 16 04:19:41.615082 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:19:41.616535 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:19:41.618646 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318). Sep 16 04:19:41.619644 systemd-logind[1519]: Removed session 17. 
Sep 16 04:19:41.675435 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:41.676773 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:41.681218 systemd-logind[1519]: New session 18 of user core. Sep 16 04:19:41.693343 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:19:41.876319 sshd[4227]: Connection closed by 10.0.0.1 port 49318 Sep 16 04:19:41.876824 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:41.884913 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:49318.service: Deactivated successfully. Sep 16 04:19:41.886910 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:19:41.888948 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:19:41.892618 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:49332.service - OpenSSH per-connection server daemon (10.0.0.1:49332). Sep 16 04:19:41.893416 systemd-logind[1519]: Removed session 18. Sep 16 04:19:41.960836 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 49332 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:41.961949 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:41.966737 systemd-logind[1519]: New session 19 of user core. Sep 16 04:19:41.986358 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:19:42.643757 sshd[4241]: Connection closed by 10.0.0.1 port 49332 Sep 16 04:19:42.644643 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:42.654807 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:49332.service: Deactivated successfully. Sep 16 04:19:42.659614 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:19:42.663006 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:19:42.664866 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:49338.service - OpenSSH per-connection server daemon (10.0.0.1:49338). Sep 16 04:19:42.667144 systemd-logind[1519]: Removed session 19. Sep 16 04:19:42.726702 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 49338 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:42.728047 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:42.732738 systemd-logind[1519]: New session 20 of user core. Sep 16 04:19:42.742368 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:19:43.012240 sshd[4265]: Connection closed by 10.0.0.1 port 49338 Sep 16 04:19:43.012799 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:43.022857 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:49338.service: Deactivated successfully. Sep 16 04:19:43.024902 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:19:43.029293 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:19:43.034970 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350). Sep 16 04:19:43.039132 systemd-logind[1519]: Removed session 20. 
Sep 16 04:19:43.103849 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:43.105444 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:43.110036 systemd-logind[1519]: New session 21 of user core. Sep 16 04:19:43.127402 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:19:43.244569 sshd[4279]: Connection closed by 10.0.0.1 port 49350 Sep 16 04:19:43.244923 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:43.247956 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:49350.service: Deactivated successfully. Sep 16 04:19:43.249870 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:19:43.251788 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:19:43.256267 systemd-logind[1519]: Removed session 21. Sep 16 04:19:48.260303 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:49362.service - OpenSSH per-connection server daemon (10.0.0.1:49362). Sep 16 04:19:48.324094 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 49362 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:48.325406 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:48.330005 systemd-logind[1519]: New session 22 of user core. Sep 16 04:19:48.344378 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:19:48.475199 sshd[4303]: Connection closed by 10.0.0.1 port 49362 Sep 16 04:19:48.475725 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:48.479294 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:49362.service: Deactivated successfully. Sep 16 04:19:48.481015 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:19:48.481814 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:19:48.482937 systemd-logind[1519]: Removed session 22. Sep 16 04:19:53.499409 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818). Sep 16 04:19:53.565798 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:53.567774 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:53.572312 systemd-logind[1519]: New session 23 of user core. Sep 16 04:19:53.582319 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 04:19:53.704223 sshd[4322]: Connection closed by 10.0.0.1 port 36818 Sep 16 04:19:53.704728 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:53.709929 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:36818.service: Deactivated successfully. Sep 16 04:19:53.712392 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:19:53.713149 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:19:53.714425 systemd-logind[1519]: Removed session 23. Sep 16 04:19:58.719581 systemd[1]: Started sshd@23-10.0.0.23:22-10.0.0.1:36830.service - OpenSSH per-connection server daemon (10.0.0.1:36830). 
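The sshd/systemd-logind churn above repeats one fixed pattern per connection: "Accepted publickey", "New session N of user core", "session closed", "Removed session N". Below is a minimal, stdlib-only Go sketch for pairing those journal lines and printing per-session durations; the timestamp layout and the message substrings are assumptions read off the lines above, and the journal is assumed to have been exported as plain text with one record per line (as journalctl prints it).

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Assumed journal timestamp prefix, e.g. "Sep 16 04:19:31.011598".
// Go's time.Parse accepts the trailing fractional seconds even though the
// layout below does not spell them out.
const stampLayout = "Jan _2 15:04:05"

var (
	newRe     = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{} // session number -> open time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := time.Parse(stampLayout, m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], end.Sub(start))
				}
				delete(opened, m[2])
			}
		}
	}
}
```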
Sep 16 04:19:58.786066 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 36830 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:58.787235 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:58.791910 systemd-logind[1519]: New session 24 of user core. Sep 16 04:19:58.805319 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:19:58.931835 sshd[4338]: Connection closed by 10.0.0.1 port 36830 Sep 16 04:19:58.931590 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Sep 16 04:19:58.945282 systemd[1]: sshd@23-10.0.0.23:22-10.0.0.1:36830.service: Deactivated successfully. Sep 16 04:19:58.948579 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:19:58.950625 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:19:58.953517 systemd[1]: Started sshd@24-10.0.0.23:22-10.0.0.1:36832.service - OpenSSH per-connection server daemon (10.0.0.1:36832). Sep 16 04:19:58.954249 systemd-logind[1519]: Removed session 24. Sep 16 04:19:59.018991 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 36832 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:19:59.020106 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:19:59.026762 systemd-logind[1519]: New session 25 of user core. Sep 16 04:19:59.042308 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 04:20:01.001357 containerd[1535]: time="2025-09-16T04:20:01.001316089Z" level=info msg="StopContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" with timeout 30 (s)" Sep 16 04:20:01.001831 containerd[1535]: time="2025-09-16T04:20:01.001652285Z" level=info msg="Stop container \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" with signal terminated" Sep 16 04:20:01.019773 systemd[1]: cri-containerd-71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858.scope: Deactivated successfully. 
Sep 16 04:20:01.022534 containerd[1535]: time="2025-09-16T04:20:01.022495721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" id:\"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" pid:3325 exited_at:{seconds:1757996401 nanos:22132964}" Sep 16 04:20:01.022659 containerd[1535]: time="2025-09-16T04:20:01.022557360Z" level=info msg="received exit event container_id:\"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" id:\"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" pid:3325 exited_at:{seconds:1757996401 nanos:22132964}" Sep 16 04:20:01.028395 containerd[1535]: time="2025-09-16T04:20:01.028358823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" id:\"5ed37d59908280b596eed15d036bc9dcee14f30d96cdd7d3a66734313a5295fd\" pid:4382 exited_at:{seconds:1757996401 nanos:28156345}" Sep 16 04:20:01.030823 containerd[1535]: time="2025-09-16T04:20:01.030792359Z" level=info msg="StopContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" with timeout 2 (s)" Sep 16 04:20:01.031110 containerd[1535]: time="2025-09-16T04:20:01.031086956Z" level=info msg="Stop container \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" with signal terminated" Sep 16 04:20:01.039409 systemd-networkd[1445]: lxc_health: Link DOWN Sep 16 04:20:01.039415 systemd-networkd[1445]: lxc_health: Lost carrier Sep 16 04:20:01.043364 containerd[1535]: time="2025-09-16T04:20:01.043316396Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:20:01.054788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858-rootfs.mount: Deactivated successfully. Sep 16 04:20:01.057606 systemd[1]: cri-containerd-0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af.scope: Deactivated successfully. Sep 16 04:20:01.057902 systemd[1]: cri-containerd-0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af.scope: Consumed 6.339s CPU time, 122.7M memory peak, 128K read from disk, 12.9M written to disk. 
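The StopContainer entries above show the usual CRI shutdown sequence: send the termination signal, wait out a grace timeout (30 s for the first container, 2 s for the cilium agent), then the cri-containerd scope is deactivated once the task exits and its CPU/memory consumption is accounted. The sketch below illustrates that SIGTERM-then-SIGKILL pattern against the public containerd Go client; the socket path and namespace are the conventional ones, the container ID is copied from the log, and this is an illustration of the pattern rather than the kubelet's actual code path.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// ID taken from the journal above; substitute as needed.
	const containerID = "71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858"

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, containerID)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case status := <-exitCh:
		code, _, _ := status.Result()
		log.Printf("task exited with status %d", code)
	case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
		// Grace period elapsed; escalate to SIGKILL and wait for the exit event.
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```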
Sep 16 04:20:01.058930 containerd[1535]: time="2025-09-16T04:20:01.058898443Z" level=info msg="received exit event container_id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" pid:3398 exited_at:{seconds:1757996401 nanos:58640606}" Sep 16 04:20:01.059414 containerd[1535]: time="2025-09-16T04:20:01.059387158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" id:\"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" pid:3398 exited_at:{seconds:1757996401 nanos:58640606}" Sep 16 04:20:01.065374 containerd[1535]: time="2025-09-16T04:20:01.065328700Z" level=info msg="StopContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" returns successfully" Sep 16 04:20:01.068075 containerd[1535]: time="2025-09-16T04:20:01.068026553Z" level=info msg="StopPodSandbox for \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\"" Sep 16 04:20:01.076662 containerd[1535]: time="2025-09-16T04:20:01.076493870Z" level=info msg="Container to stop \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.080956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af-rootfs.mount: Deactivated successfully. Sep 16 04:20:01.085068 systemd[1]: cri-containerd-58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b.scope: Deactivated successfully. Sep 16 04:20:01.091458 containerd[1535]: time="2025-09-16T04:20:01.091326885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" id:\"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" pid:3121 exit_status:137 exited_at:{seconds:1757996401 nanos:90652691}" Sep 16 04:20:01.093232 containerd[1535]: time="2025-09-16T04:20:01.093191226Z" level=info msg="StopContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" returns successfully" Sep 16 04:20:01.093675 containerd[1535]: time="2025-09-16T04:20:01.093643022Z" level=info msg="StopPodSandbox for \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\"" Sep 16 04:20:01.093754 containerd[1535]: time="2025-09-16T04:20:01.093731741Z" level=info msg="Container to stop \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.093795 containerd[1535]: time="2025-09-16T04:20:01.093750181Z" level=info msg="Container to stop \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.093795 containerd[1535]: time="2025-09-16T04:20:01.093767901Z" level=info msg="Container to stop \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.093795 containerd[1535]: time="2025-09-16T04:20:01.093777980Z" level=info msg="Container to stop \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.093795 containerd[1535]: time="2025-09-16T04:20:01.093790460Z" level=info msg="Container to stop 
\"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:20:01.099447 systemd[1]: cri-containerd-6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5.scope: Deactivated successfully. Sep 16 04:20:01.120739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5-rootfs.mount: Deactivated successfully. Sep 16 04:20:01.126260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b-rootfs.mount: Deactivated successfully. Sep 16 04:20:01.128268 containerd[1535]: time="2025-09-16T04:20:01.128208282Z" level=info msg="shim disconnected" id=6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5 namespace=k8s.io Sep 16 04:20:01.135384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b-shm.mount: Deactivated successfully. Sep 16 04:20:01.139359 containerd[1535]: time="2025-09-16T04:20:01.128268882Z" level=warning msg="cleaning up after shim disconnected" id=6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5 namespace=k8s.io Sep 16 04:20:01.139459 containerd[1535]: time="2025-09-16T04:20:01.139357373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:20:01.139459 containerd[1535]: time="2025-09-16T04:20:01.132113644Z" level=info msg="shim disconnected" id=58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b namespace=k8s.io Sep 16 04:20:01.139508 containerd[1535]: time="2025-09-16T04:20:01.139463292Z" level=warning msg="cleaning up after shim disconnected" id=58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b namespace=k8s.io Sep 16 04:20:01.139508 containerd[1535]: time="2025-09-16T04:20:01.139490171Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:20:01.139639 containerd[1535]: time="2025-09-16T04:20:01.132630679Z" level=info msg="TearDown network for sandbox \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" successfully" Sep 16 04:20:01.139639 containerd[1535]: time="2025-09-16T04:20:01.139602930Z" level=info msg="StopPodSandbox for \"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" returns successfully" Sep 16 04:20:01.140183 containerd[1535]: time="2025-09-16T04:20:01.133006475Z" level=info msg="received exit event sandbox_id:\"58c1274190b82711d74b2904115a4fff75ca67315118f19a55f956bfe017180b\" exit_status:137 exited_at:{seconds:1757996401 nanos:90652691}" Sep 16 04:20:01.159546 containerd[1535]: time="2025-09-16T04:20:01.159500895Z" level=info msg="received exit event sandbox_id:\"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" exit_status:137 exited_at:{seconds:1757996401 nanos:104734753}" Sep 16 04:20:01.159795 containerd[1535]: time="2025-09-16T04:20:01.159770652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" id:\"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" pid:2905 exit_status:137 exited_at:{seconds:1757996401 nanos:104734753}" Sep 16 04:20:01.159947 containerd[1535]: time="2025-09-16T04:20:01.159780372Z" level=info msg="TearDown network for sandbox \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" successfully" Sep 16 04:20:01.160022 containerd[1535]: time="2025-09-16T04:20:01.159999730Z" level=info 
msg="StopPodSandbox for \"6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5\" returns successfully" Sep 16 04:20:01.322220 kubelet[2729]: I0916 04:20:01.322094 2729 scope.go:117] "RemoveContainer" containerID="0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af" Sep 16 04:20:01.326366 containerd[1535]: time="2025-09-16T04:20:01.326144338Z" level=info msg="RemoveContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\"" Sep 16 04:20:01.330957 kubelet[2729]: I0916 04:20:01.330922 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-xtables-lock\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.330957 kubelet[2729]: I0916 04:20:01.330954 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-lib-modules\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331087 kubelet[2729]: I0916 04:20:01.330976 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzvf\" (UniqueName: \"kubernetes.io/projected/64a4907b-0046-4414-8bba-8cd535a72115-kube-api-access-xrzvf\") pod \"64a4907b-0046-4414-8bba-8cd535a72115\" (UID: \"64a4907b-0046-4414-8bba-8cd535a72115\") " Sep 16 04:20:01.331087 kubelet[2729]: I0916 04:20:01.330998 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-hostproc\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331087 kubelet[2729]: I0916 04:20:01.331048 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-config-path\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331087 kubelet[2729]: I0916 04:20:01.331066 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8361ad84-87e9-4783-b197-bfc57da9a1a8-clustermesh-secrets\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331087 kubelet[2729]: I0916 04:20:01.331081 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-bpf-maps\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331097 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-582zs\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-kube-api-access-582zs\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331112 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-kernel\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331126 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-cgroup\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331159 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-run\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331173 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cni-path\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331233 kubelet[2729]: I0916 04:20:01.331196 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-net\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331351 kubelet[2729]: I0916 04:20:01.331211 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-etc-cni-netd\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.331351 kubelet[2729]: I0916 04:20:01.331230 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a4907b-0046-4414-8bba-8cd535a72115-cilium-config-path\") pod \"64a4907b-0046-4414-8bba-8cd535a72115\" (UID: \"64a4907b-0046-4414-8bba-8cd535a72115\") " Sep 16 04:20:01.331351 kubelet[2729]: I0916 04:20:01.331247 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-hubble-tls\") pod \"8361ad84-87e9-4783-b197-bfc57da9a1a8\" (UID: \"8361ad84-87e9-4783-b197-bfc57da9a1a8\") " Sep 16 04:20:01.334241 containerd[1535]: time="2025-09-16T04:20:01.334204419Z" level=info msg="RemoveContainer for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" returns successfully" Sep 16 04:20:01.335575 kubelet[2729]: I0916 04:20:01.335543 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.335648 kubelet[2729]: I0916 04:20:01.335611 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.335764 kubelet[2729]: I0916 04:20:01.335742 2729 scope.go:117] "RemoveContainer" containerID="515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe" Sep 16 04:20:01.335856 kubelet[2729]: I0916 04:20:01.335842 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.338696 kubelet[2729]: I0916 04:20:01.338658 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8361ad84-87e9-4783-b197-bfc57da9a1a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:20:01.338786 kubelet[2729]: I0916 04:20:01.338714 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340178 kubelet[2729]: I0916 04:20:01.339293 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:20:01.340178 kubelet[2729]: I0916 04:20:01.339336 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340178 kubelet[2729]: I0916 04:20:01.339352 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340178 kubelet[2729]: I0916 04:20:01.339366 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340178 kubelet[2729]: I0916 04:20:01.339380 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340457 kubelet[2729]: I0916 04:20:01.340415 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:20:01.340457 kubelet[2729]: I0916 04:20:01.340429 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a4907b-0046-4414-8bba-8cd535a72115-kube-api-access-xrzvf" (OuterVolumeSpecName: "kube-api-access-xrzvf") pod "64a4907b-0046-4414-8bba-8cd535a72115" (UID: "64a4907b-0046-4414-8bba-8cd535a72115"). InnerVolumeSpecName "kube-api-access-xrzvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:20:01.340525 kubelet[2729]: I0916 04:20:01.340480 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.340592 kubelet[2729]: I0916 04:20:01.340574 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:20:01.341307 kubelet[2729]: I0916 04:20:01.341273 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-kube-api-access-582zs" (OuterVolumeSpecName: "kube-api-access-582zs") pod "8361ad84-87e9-4783-b197-bfc57da9a1a8" (UID: "8361ad84-87e9-4783-b197-bfc57da9a1a8"). InnerVolumeSpecName "kube-api-access-582zs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:20:01.341940 containerd[1535]: time="2025-09-16T04:20:01.341912463Z" level=info msg="RemoveContainer for \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\"" Sep 16 04:20:01.342520 kubelet[2729]: I0916 04:20:01.342492 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a4907b-0046-4414-8bba-8cd535a72115-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64a4907b-0046-4414-8bba-8cd535a72115" (UID: "64a4907b-0046-4414-8bba-8cd535a72115"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:20:01.347543 containerd[1535]: time="2025-09-16T04:20:01.347500009Z" level=info msg="RemoveContainer for \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" returns successfully" Sep 16 04:20:01.347850 kubelet[2729]: I0916 04:20:01.347701 2729 scope.go:117] "RemoveContainer" containerID="04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18" Sep 16 04:20:01.349910 containerd[1535]: time="2025-09-16T04:20:01.349882985Z" level=info msg="RemoveContainer for \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\"" Sep 16 04:20:01.353451 containerd[1535]: time="2025-09-16T04:20:01.353427670Z" level=info msg="RemoveContainer for \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" returns successfully" Sep 16 04:20:01.353567 kubelet[2729]: I0916 04:20:01.353552 2729 scope.go:117] "RemoveContainer" containerID="51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37" Sep 16 04:20:01.354836 containerd[1535]: time="2025-09-16T04:20:01.354766337Z" level=info msg="RemoveContainer for \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\"" Sep 16 04:20:01.367658 containerd[1535]: time="2025-09-16T04:20:01.367625451Z" level=info msg="RemoveContainer for \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" returns successfully" Sep 16 04:20:01.367800 kubelet[2729]: I0916 04:20:01.367776 2729 scope.go:117] "RemoveContainer" containerID="0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09" Sep 16 04:20:01.368974 containerd[1535]: time="2025-09-16T04:20:01.368951398Z" level=info msg="RemoveContainer for \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\"" Sep 16 04:20:01.372200 containerd[1535]: time="2025-09-16T04:20:01.372175606Z" level=info msg="RemoveContainer for \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" returns successfully" Sep 16 04:20:01.372382 kubelet[2729]: I0916 04:20:01.372333 2729 scope.go:117] "RemoveContainer" containerID="0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af" Sep 16 04:20:01.377499 containerd[1535]: time="2025-09-16T04:20:01.372606562Z" level=error msg="ContainerStatus for \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\": not found" Sep 16 04:20:01.380089 kubelet[2729]: E0916 04:20:01.379919 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\": not found" containerID="0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af" Sep 16 04:20:01.380089 kubelet[2729]: I0916 
04:20:01.379963 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af"} err="failed to get container status \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f217e35cbb55d5516642b0d937cbe116e494c61e95bcac0ee77e21d478446af\": not found" Sep 16 04:20:01.380089 kubelet[2729]: I0916 04:20:01.380005 2729 scope.go:117] "RemoveContainer" containerID="515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe" Sep 16 04:20:01.380225 containerd[1535]: time="2025-09-16T04:20:01.380185808Z" level=error msg="ContainerStatus for \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\": not found" Sep 16 04:20:01.380347 kubelet[2729]: E0916 04:20:01.380324 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\": not found" containerID="515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe" Sep 16 04:20:01.380376 kubelet[2729]: I0916 04:20:01.380351 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe"} err="failed to get container status \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"515ea6751343bdfdfc0b343f7743c83161a48033586a036e9511b82e9686d8fe\": not found" Sep 16 04:20:01.380376 kubelet[2729]: I0916 04:20:01.380367 2729 scope.go:117] "RemoveContainer" containerID="04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18" Sep 16 04:20:01.380561 containerd[1535]: time="2025-09-16T04:20:01.380533404Z" level=error msg="ContainerStatus for \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\": not found" Sep 16 04:20:01.380681 kubelet[2729]: E0916 04:20:01.380662 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\": not found" containerID="04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18" Sep 16 04:20:01.380776 kubelet[2729]: I0916 04:20:01.380757 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18"} err="failed to get container status \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\": rpc error: code = NotFound desc = an error occurred when try to find container \"04e64a148b5b00f71e5c634c0e347ae5a0a838d70fdaee3bad43680bea858d18\": not found" Sep 16 04:20:01.380897 kubelet[2729]: I0916 04:20:01.380815 2729 scope.go:117] "RemoveContainer" containerID="51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37" Sep 16 04:20:01.380993 containerd[1535]: time="2025-09-16T04:20:01.380958880Z" level=error msg="ContainerStatus for 
\"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\": not found" Sep 16 04:20:01.381116 kubelet[2729]: E0916 04:20:01.381093 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\": not found" containerID="51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37" Sep 16 04:20:01.381168 kubelet[2729]: I0916 04:20:01.381119 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37"} err="failed to get container status \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\": rpc error: code = NotFound desc = an error occurred when try to find container \"51072900843108087329498d11d3ff770932fcfb9b9528a2b9f9995e19337d37\": not found" Sep 16 04:20:01.381168 kubelet[2729]: I0916 04:20:01.381134 2729 scope.go:117] "RemoveContainer" containerID="0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09" Sep 16 04:20:01.381336 containerd[1535]: time="2025-09-16T04:20:01.381308317Z" level=error msg="ContainerStatus for \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\": not found" Sep 16 04:20:01.381455 kubelet[2729]: E0916 04:20:01.381434 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\": not found" containerID="0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09" Sep 16 04:20:01.381524 kubelet[2729]: I0916 04:20:01.381507 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09"} err="failed to get container status \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e531ac30c0b7bcd1d330a025f8705c9b3110357aa0a576c77885a5c080f0d09\": not found" Sep 16 04:20:01.381572 kubelet[2729]: I0916 04:20:01.381562 2729 scope.go:117] "RemoveContainer" containerID="71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858" Sep 16 04:20:01.383019 containerd[1535]: time="2025-09-16T04:20:01.382998420Z" level=info msg="RemoveContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\"" Sep 16 04:20:01.385540 containerd[1535]: time="2025-09-16T04:20:01.385516435Z" level=info msg="RemoveContainer for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" returns successfully" Sep 16 04:20:01.385717 kubelet[2729]: I0916 04:20:01.385677 2729 scope.go:117] "RemoveContainer" containerID="71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858" Sep 16 04:20:01.385944 containerd[1535]: time="2025-09-16T04:20:01.385911391Z" level=error msg="ContainerStatus for \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\": not found" Sep 16 04:20:01.386050 kubelet[2729]: E0916 04:20:01.386029 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\": not found" containerID="71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858" Sep 16 04:20:01.386181 kubelet[2729]: I0916 04:20:01.386149 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858"} err="failed to get container status \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\": rpc error: code = NotFound desc = an error occurred when try to find container \"71cd72f6210fd4bc975202a9f2dd4d8cdb5937b5ebbe9f913bc86104b63c6858\": not found" Sep 16 04:20:01.432437 kubelet[2729]: I0916 04:20:01.432396 2729 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432437 kubelet[2729]: I0916 04:20:01.432427 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432437 kubelet[2729]: I0916 04:20:01.432439 2729 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8361ad84-87e9-4783-b197-bfc57da9a1a8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432447 2729 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432456 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-582zs\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-kube-api-access-582zs\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432464 2729 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432472 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432479 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432486 2729 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432493 2729 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432544 kubelet[2729]: I0916 04:20:01.432501 2729 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432719 kubelet[2729]: I0916 04:20:01.432508 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64a4907b-0046-4414-8bba-8cd535a72115-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432719 kubelet[2729]: I0916 04:20:01.432516 2729 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8361ad84-87e9-4783-b197-bfc57da9a1a8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432719 kubelet[2729]: I0916 04:20:01.432524 2729 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432719 kubelet[2729]: I0916 04:20:01.432530 2729 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8361ad84-87e9-4783-b197-bfc57da9a1a8-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.432719 kubelet[2729]: I0916 04:20:01.432538 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrzvf\" (UniqueName: \"kubernetes.io/projected/64a4907b-0046-4414-8bba-8cd535a72115-kube-api-access-xrzvf\") on node \"localhost\" DevicePath \"\"" Sep 16 04:20:01.627577 systemd[1]: Removed slice kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice - libcontainer container kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice. Sep 16 04:20:01.627669 systemd[1]: kubepods-burstable-pod8361ad84_87e9_4783_b197_bfc57da9a1a8.slice: Consumed 6.424s CPU time, 123.1M memory peak, 140K read from disk, 16.1M written to disk. Sep 16 04:20:01.631125 systemd[1]: Removed slice kubepods-besteffort-pod64a4907b_0046_4414_8bba_8cd535a72115.slice - libcontainer container kubepods-besteffort-pod64a4907b_0046_4414_8bba_8cd535a72115.slice. Sep 16 04:20:02.054570 systemd[1]: var-lib-kubelet-pods-64a4907b\x2d0046\x2d4414\x2d8bba\x2d8cd535a72115-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxrzvf.mount: Deactivated successfully. Sep 16 04:20:02.054667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fb97fd776ae2b0926e2b41b4832e444cc37df60b4ae7a7e6d02254038e6fcb5-shm.mount: Deactivated successfully. Sep 16 04:20:02.054729 systemd[1]: var-lib-kubelet-pods-8361ad84\x2d87e9\x2d4783\x2db197\x2dbfc57da9a1a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d582zs.mount: Deactivated successfully. Sep 16 04:20:02.054786 systemd[1]: var-lib-kubelet-pods-8361ad84\x2d87e9\x2d4783\x2db197\x2dbfc57da9a1a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:20:02.054835 systemd[1]: var-lib-kubelet-pods-8361ad84\x2d87e9\x2d4783\x2db197\x2dbfc57da9a1a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 16 04:20:02.934183 sshd[4355]: Connection closed by 10.0.0.1 port 36832 Sep 16 04:20:02.934575 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Sep 16 04:20:02.943560 systemd[1]: sshd@24-10.0.0.23:22-10.0.0.1:36832.service: Deactivated successfully. Sep 16 04:20:02.945481 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 04:20:02.945808 systemd[1]: session-25.scope: Consumed 1.259s CPU time, 26.2M memory peak. Sep 16 04:20:02.946400 systemd-logind[1519]: Session 25 logged out. Waiting for processes to exit. Sep 16 04:20:02.949967 systemd[1]: Started sshd@25-10.0.0.23:22-10.0.0.1:41956.service - OpenSSH per-connection server daemon (10.0.0.1:41956). Sep 16 04:20:02.950460 systemd-logind[1519]: Removed session 25. Sep 16 04:20:03.017275 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 41956 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:20:03.018591 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:20:03.023169 systemd-logind[1519]: New session 26 of user core. Sep 16 04:20:03.033307 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 16 04:20:03.095191 kubelet[2729]: I0916 04:20:03.095130 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a4907b-0046-4414-8bba-8cd535a72115" path="/var/lib/kubelet/pods/64a4907b-0046-4414-8bba-8cd535a72115/volumes" Sep 16 04:20:03.095580 kubelet[2729]: I0916 04:20:03.095554 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8361ad84-87e9-4783-b197-bfc57da9a1a8" path="/var/lib/kubelet/pods/8361ad84-87e9-4783-b197-bfc57da9a1a8/volumes" Sep 16 04:20:04.092887 kubelet[2729]: E0916 04:20:04.092803 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:20:04.173634 sshd[4509]: Connection closed by 10.0.0.1 port 41956 Sep 16 04:20:04.176199 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Sep 16 04:20:04.192184 systemd[1]: sshd@25-10.0.0.23:22-10.0.0.1:41956.service: Deactivated successfully. Sep 16 04:20:04.195021 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 04:20:04.195510 systemd[1]: session-26.scope: Consumed 1.051s CPU time, 24.1M memory peak. Sep 16 04:20:04.198250 systemd-logind[1519]: Session 26 logged out. Waiting for processes to exit. Sep 16 04:20:04.204429 systemd[1]: Started sshd@26-10.0.0.23:22-10.0.0.1:41964.service - OpenSSH per-connection server daemon (10.0.0.1:41964). Sep 16 04:20:04.206434 systemd-logind[1519]: Removed session 26. Sep 16 04:20:04.222578 systemd[1]: Created slice kubepods-burstable-pod17a52bed_9f63_43b5_873a_bebd9d2d656b.slice - libcontainer container kubepods-burstable-pod17a52bed_9f63_43b5_873a_bebd9d2d656b.slice. Sep 16 04:20:04.274053 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 41964 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:20:04.275437 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:20:04.279734 systemd-logind[1519]: New session 27 of user core. Sep 16 04:20:04.295302 systemd[1]: Started session-27.scope - Session 27 of User core. 
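Once the pods are gone, kubelet logs "Cleaned up orphaned pod volumes dir" for /var/lib/kubelet/pods/<uid>/volumes, as seen above for both pod UIDs. A small stdlib-only sketch for spot-checking that no volume subdirectories remain for a given pod UID follows; the directory layout is the one visible in the journal, and the UID is taken from the log as a placeholder.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Pod UID as reported by kubelet; substitute as needed.
	uid := "8361ad84-87e9-4783-b197-bfc57da9a1a8"
	dir := filepath.Join("/var/lib/kubelet/pods", uid, "volumes")

	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("volumes dir already removed:", dir)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		fmt.Println("volumes dir empty, cleanup can proceed:", dir)
		return
	}
	for _, e := range entries {
		// Each entry is a plugin dir such as kubernetes.io~projected or
		// kubernetes.io~secret, holding one subdir per remaining volume.
		fmt.Println("still present:", filepath.Join(dir, e.Name()))
	}
}
```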
Sep 16 04:20:04.345289 sshd[4524]: Connection closed by 10.0.0.1 port 41964 Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345525 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-etc-cni-netd\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345564 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17a52bed-9f63-43b5-873a-bebd9d2d656b-cilium-config-path\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345583 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-host-proc-sys-net\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345600 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17a52bed-9f63-43b5-873a-bebd9d2d656b-hubble-tls\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345618 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-bpf-maps\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346177 kubelet[2729]: I0916 04:20:04.345633 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-xtables-lock\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346483 kubelet[2729]: I0916 04:20:04.345656 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-cilium-run\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346483 kubelet[2729]: I0916 04:20:04.345673 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-cilium-cgroup\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346483 kubelet[2729]: I0916 04:20:04.345690 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-host-proc-sys-kernel\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346483 kubelet[2729]: I0916 04:20:04.345706 2729 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17a52bed-9f63-43b5-873a-bebd9d2d656b-clustermesh-secrets\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346483 kubelet[2729]: I0916 04:20:04.345720 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17a52bed-9f63-43b5-873a-bebd9d2d656b-cilium-ipsec-secrets\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346180 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Sep 16 04:20:04.346640 kubelet[2729]: I0916 04:20:04.345777 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grwr\" (UniqueName: \"kubernetes.io/projected/17a52bed-9f63-43b5-873a-bebd9d2d656b-kube-api-access-6grwr\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346640 kubelet[2729]: I0916 04:20:04.345856 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-cni-path\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346640 kubelet[2729]: I0916 04:20:04.345886 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-lib-modules\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.346640 kubelet[2729]: I0916 04:20:04.345906 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17a52bed-9f63-43b5-873a-bebd9d2d656b-hostproc\") pod \"cilium-tqr4r\" (UID: \"17a52bed-9f63-43b5-873a-bebd9d2d656b\") " pod="kube-system/cilium-tqr4r" Sep 16 04:20:04.358281 systemd[1]: sshd@26-10.0.0.23:22-10.0.0.1:41964.service: Deactivated successfully. Sep 16 04:20:04.359864 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 04:20:04.360535 systemd-logind[1519]: Session 27 logged out. Waiting for processes to exit. Sep 16 04:20:04.363792 systemd[1]: Started sshd@27-10.0.0.23:22-10.0.0.1:41972.service - OpenSSH per-connection server daemon (10.0.0.1:41972). Sep 16 04:20:04.364472 systemd-logind[1519]: Removed session 27. Sep 16 04:20:04.426194 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 41972 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:20:04.427436 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:20:04.431454 systemd-logind[1519]: New session 28 of user core. Sep 16 04:20:04.440304 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 16 04:20:04.528090 kubelet[2729]: E0916 04:20:04.527838 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:20:04.529134 containerd[1535]: time="2025-09-16T04:20:04.529084939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqr4r,Uid:17a52bed-9f63-43b5-873a-bebd9d2d656b,Namespace:kube-system,Attempt:0,}" Sep 16 04:20:04.554167 containerd[1535]: time="2025-09-16T04:20:04.554047490Z" level=info msg="connecting to shim 822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:20:04.585344 systemd[1]: Started cri-containerd-822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d.scope - libcontainer container 822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d. Sep 16 04:20:04.607944 containerd[1535]: time="2025-09-16T04:20:04.607741373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqr4r,Uid:17a52bed-9f63-43b5-873a-bebd9d2d656b,Namespace:kube-system,Attempt:0,} returns sandbox id \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\"" Sep 16 04:20:04.608878 kubelet[2729]: E0916 04:20:04.608844 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:20:04.614492 containerd[1535]: time="2025-09-16T04:20:04.614450698Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:20:04.632638 containerd[1535]: time="2025-09-16T04:20:04.632594045Z" level=info msg="Container a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:20:04.638523 containerd[1535]: time="2025-09-16T04:20:04.638403055Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\"" Sep 16 04:20:04.639345 containerd[1535]: time="2025-09-16T04:20:04.639245650Z" level=info msg="StartContainer for \"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\"" Sep 16 04:20:04.640453 containerd[1535]: time="2025-09-16T04:20:04.640324325Z" level=info msg="connecting to shim a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" protocol=ttrpc version=3 Sep 16 04:20:04.667343 systemd[1]: Started cri-containerd-a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b.scope - libcontainer container a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b. Sep 16 04:20:04.693191 containerd[1535]: time="2025-09-16T04:20:04.693131452Z" level=info msg="StartContainer for \"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\" returns successfully" Sep 16 04:20:04.702379 systemd[1]: cri-containerd-a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b.scope: Deactivated successfully. 
Sep 16 04:20:04.705559 containerd[1535]: time="2025-09-16T04:20:04.705427629Z" level=info msg="received exit event container_id:\"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\" id:\"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\" pid:4606 exited_at:{seconds:1757996404 nanos:705155710}"
Sep 16 04:20:04.705636 containerd[1535]: time="2025-09-16T04:20:04.705519148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\" id:\"a05c0fe8779c3aaac0d62950e4a97b69617f594352ebf4887df214ee1c66964b\" pid:4606 exited_at:{seconds:1757996404 nanos:705155710}"
Sep 16 04:20:05.137757 kubelet[2729]: E0916 04:20:05.137717 2729 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 16 04:20:05.338713 kubelet[2729]: E0916 04:20:05.338361 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:05.344340 containerd[1535]: time="2025-09-16T04:20:05.344303991Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 16 04:20:05.354326 containerd[1535]: time="2025-09-16T04:20:05.354249034Z" level=info msg="Container 676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:20:05.361432 containerd[1535]: time="2025-09-16T04:20:05.361384568Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\""
Sep 16 04:20:05.362015 containerd[1535]: time="2025-09-16T04:20:05.361951525Z" level=info msg="StartContainer for \"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\""
Sep 16 04:20:05.362851 containerd[1535]: time="2025-09-16T04:20:05.362816202Z" level=info msg="connecting to shim 676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" protocol=ttrpc version=3
Sep 16 04:20:05.380329 systemd[1]: Started cri-containerd-676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b.scope - libcontainer container 676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b.
Sep 16 04:20:05.405501 containerd[1535]: time="2025-09-16T04:20:05.405469644Z" level=info msg="StartContainer for \"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\" returns successfully"
Sep 16 04:20:05.411505 systemd[1]: cri-containerd-676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b.scope: Deactivated successfully.
Sep 16 04:20:05.412708 containerd[1535]: time="2025-09-16T04:20:05.412673657Z" level=info msg="received exit event container_id:\"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\" id:\"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\" pid:4654 exited_at:{seconds:1757996405 nanos:412455498}"
Sep 16 04:20:05.412782 containerd[1535]: time="2025-09-16T04:20:05.412715897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\" id:\"676166242ed06db6867850d305c2812b73412182ee5ccc9b3cea74c24960573b\" pid:4654 exited_at:{seconds:1757996405 nanos:412455498}"
Sep 16 04:20:06.093588 kubelet[2729]: E0916 04:20:06.093512 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:06.341977 kubelet[2729]: E0916 04:20:06.341936 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:06.348738 containerd[1535]: time="2025-09-16T04:20:06.348621721Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 16 04:20:06.358741 containerd[1535]: time="2025-09-16T04:20:06.358693458Z" level=info msg="Container b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:20:06.364042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429871646.mount: Deactivated successfully.
Sep 16 04:20:06.414150 containerd[1535]: time="2025-09-16T04:20:06.414096771Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\""
Sep 16 04:20:06.414672 containerd[1535]: time="2025-09-16T04:20:06.414635730Z" level=info msg="StartContainer for \"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\""
Sep 16 04:20:06.417068 containerd[1535]: time="2025-09-16T04:20:06.417017364Z" level=info msg="connecting to shim b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" protocol=ttrpc version=3
Sep 16 04:20:06.436326 systemd[1]: Started cri-containerd-b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff.scope - libcontainer container b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff.
Sep 16 04:20:06.469015 systemd[1]: cri-containerd-b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff.scope: Deactivated successfully.
Sep 16 04:20:06.470543 containerd[1535]: time="2025-09-16T04:20:06.470512122Z" level=info msg="StartContainer for \"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\" returns successfully"
Sep 16 04:20:06.472120 containerd[1535]: time="2025-09-16T04:20:06.472069878Z" level=info msg="received exit event container_id:\"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\" id:\"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\" pid:4699 exited_at:{seconds:1757996406 nanos:471905999}"
Sep 16 04:20:06.472619 containerd[1535]: time="2025-09-16T04:20:06.472363717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\" id:\"b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff\" pid:4699 exited_at:{seconds:1757996406 nanos:471905999}"
Sep 16 04:20:06.490906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b991250f15892e9225ba2a67ee5bf0861b16f0b21229466697a0deed655c17ff-rootfs.mount: Deactivated successfully.
Sep 16 04:20:07.252680 kubelet[2729]: I0916 04:20:07.252606 2729 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:20:07Z","lastTransitionTime":"2025-09-16T04:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 16 04:20:07.349216 kubelet[2729]: E0916 04:20:07.349082 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:07.355159 containerd[1535]: time="2025-09-16T04:20:07.354376537Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 16 04:20:07.364866 containerd[1535]: time="2025-09-16T04:20:07.363796489Z" level=info msg="Container af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:20:07.371421 containerd[1535]: time="2025-09-16T04:20:07.371363722Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\""
Sep 16 04:20:07.372637 containerd[1535]: time="2025-09-16T04:20:07.372239361Z" level=info msg="StartContainer for \"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\""
Sep 16 04:20:07.373413 containerd[1535]: time="2025-09-16T04:20:07.373379560Z" level=info msg="connecting to shim af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" protocol=ttrpc version=3
Sep 16 04:20:07.393283 systemd[1]: Started cri-containerd-af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756.scope - libcontainer container af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756.
Sep 16 04:20:07.413496 systemd[1]: cri-containerd-af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756.scope: Deactivated successfully.
Sep 16 04:20:07.415953 containerd[1535]: time="2025-09-16T04:20:07.415848080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\" id:\"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\" pid:4738 exited_at:{seconds:1757996407 nanos:414385402}"
Sep 16 04:20:07.416076 containerd[1535]: time="2025-09-16T04:20:07.416056720Z" level=info msg="received exit event container_id:\"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\" id:\"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\" pid:4738 exited_at:{seconds:1757996407 nanos:414385402}"
Sep 16 04:20:07.416543 containerd[1535]: time="2025-09-16T04:20:07.416515760Z" level=info msg="StartContainer for \"af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756\" returns successfully"
Sep 16 04:20:07.491004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af6aa0ef8d49599484d955f19924acfaf5c79acbcd0d1f6c0992793aef1e9756-rootfs.mount: Deactivated successfully.
Sep 16 04:20:08.093018 kubelet[2729]: E0916 04:20:08.092969 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:08.355224 kubelet[2729]: E0916 04:20:08.354462 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:08.359101 containerd[1535]: time="2025-09-16T04:20:08.359037641Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 16 04:20:08.374483 containerd[1535]: time="2025-09-16T04:20:08.374441127Z" level=info msg="Container 67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:20:08.375700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266930616.mount: Deactivated successfully.
Sep 16 04:20:08.379132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014578702.mount: Deactivated successfully.
Sep 16 04:20:08.383322 containerd[1535]: time="2025-09-16T04:20:08.383286250Z" level=info msg="CreateContainer within sandbox \"822fd113b326491cc7ecc5d673574595b69b3494760cadbccf82a48cee0e805d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\""
Sep 16 04:20:08.384156 containerd[1535]: time="2025-09-16T04:20:08.384022731Z" level=info msg="StartContainer for \"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\""
Sep 16 04:20:08.385212 containerd[1535]: time="2025-09-16T04:20:08.384903571Z" level=info msg="connecting to shim 67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394" address="unix:///run/containerd/s/7cc17417f4beedd22a4274810146c3e3d37b781be5c0db8db05f5cade3afe60a" protocol=ttrpc version=3
Sep 16 04:20:08.408294 systemd[1]: Started cri-containerd-67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394.scope - libcontainer container 67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394.
Sep 16 04:20:08.440182 containerd[1535]: time="2025-09-16T04:20:08.440147713Z" level=info msg="StartContainer for \"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" returns successfully"
Sep 16 04:20:08.496607 containerd[1535]: time="2025-09-16T04:20:08.496564296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" id:\"f1cee62b8ff7c0019f2797a4f705b57ea1fe0ca988ead5a2722f17d68a6e3195\" pid:4807 exited_at:{seconds:1757996408 nanos:496025975}"
Sep 16 04:20:08.694191 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 16 04:20:09.096017 kubelet[2729]: E0916 04:20:09.095912 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:09.360460 kubelet[2729]: E0916 04:20:09.360363 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:09.377175 kubelet[2729]: I0916 04:20:09.377113 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tqr4r" podStartSLOduration=5.37709697 podStartE2EDuration="5.37709697s" podCreationTimestamp="2025-09-16 04:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:20:09.375050726 +0000 UTC m=+84.364771624" watchObservedRunningTime="2025-09-16 04:20:09.37709697 +0000 UTC m=+84.366817868"
Sep 16 04:20:10.529681 kubelet[2729]: E0916 04:20:10.529646 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:10.835737 containerd[1535]: time="2025-09-16T04:20:10.835360819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" id:\"57f4a8f2fa42e560401e399bfdd02f950fc36b9a4ed78ecc74c2f5148f883a5b\" pid:5089 exit_status:1 exited_at:{seconds:1757996410 nanos:834789617}"
Sep 16 04:20:11.690765 systemd-networkd[1445]: lxc_health: Link UP
Sep 16 04:20:11.697774 systemd-networkd[1445]: lxc_health: Gained carrier
Sep 16 04:20:12.529902 kubelet[2729]: E0916 04:20:12.529847 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:12.976172 containerd[1535]: time="2025-09-16T04:20:12.976023475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" id:\"a196332f9408b35ad52e2b89f750272914c879bc930b24455a97d667d1a31f06\" pid:5342 exited_at:{seconds:1757996412 nanos:968519795}"
Sep 16 04:20:12.978753 kubelet[2729]: E0916 04:20:12.978727 2729 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36806->127.0.0.1:33353: read tcp 127.0.0.1:36806->127.0.0.1:33353: read: connection reset by peer
Sep 16 04:20:13.374666 kubelet[2729]: E0916 04:20:13.374121 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:13.652677 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Sep 16 04:20:14.369523 kubelet[2729]: E0916 04:20:14.369450 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:20:15.083235 containerd[1535]: time="2025-09-16T04:20:15.083185177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" id:\"e4d76e169301587c8a9525fb440a3c5e70d3d8aca94681c382f4f6c11ac1de12\" pid:5372 exited_at:{seconds:1757996415 nanos:82909895}"
Sep 16 04:20:17.205239 containerd[1535]: time="2025-09-16T04:20:17.205175866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67c57c2fc767cbdaed42b59f4691bd8740701129dd90cbc25da93be00d6f4394\" id:\"216dc9a8fe4d00efc5cd94b48330d9264677791a8256f7d9165ce69831cd5f5d\" pid:5402 exited_at:{seconds:1757996417 nanos:204838463}"
Sep 16 04:20:17.214235 sshd[4536]: Connection closed by 10.0.0.1 port 41972
Sep 16 04:20:17.214997 sshd-session[4532]: pam_unix(sshd:session): session closed for user core
Sep 16 04:20:17.221000 systemd[1]: sshd@27-10.0.0.23:22-10.0.0.1:41972.service: Deactivated successfully.
Sep 16 04:20:17.222962 systemd[1]: session-28.scope: Deactivated successfully.
Sep 16 04:20:17.223823 systemd-logind[1519]: Session 28 logged out. Waiting for processes to exit.
Sep 16 04:20:17.224957 systemd-logind[1519]: Removed session 28.