Sep 3 23:37:56.773695 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 3 23:37:56.773771 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:37:56.773784 kernel: KASLR enabled
Sep 3 23:37:56.773790 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:37:56.773795 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 3 23:37:56.773801 kernel: random: crng init done
Sep 3 23:37:56.773808 kernel: secureboot: Secure boot disabled
Sep 3 23:37:56.773813 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:37:56.773819 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 3 23:37:56.773826 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 3 23:37:56.773832 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773838 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773843 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773849 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773856 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773864 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773870 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773876 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773882 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:37:56.773888 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 3 23:37:56.773894 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:37:56.773900 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:37:56.773906 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 3 23:37:56.773912 kernel: Zone ranges:
Sep 3 23:37:56.773918 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:37:56.773926 kernel: DMA32 empty
Sep 3 23:37:56.773932 kernel: Normal empty
Sep 3 23:37:56.773938 kernel: Device empty
Sep 3 23:37:56.773943 kernel: Movable zone start for each node
Sep 3 23:37:56.773949 kernel: Early memory node ranges
Sep 3 23:37:56.773955 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 3 23:37:56.773962 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 3 23:37:56.773968 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 3 23:37:56.773974 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 3 23:37:56.773980 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 3 23:37:56.773985 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 3 23:37:56.773996 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 3 23:37:56.774005 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 3 23:37:56.774012 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 3 23:37:56.774019 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 3 23:37:56.774028 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 3 23:37:56.774035 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 3 23:37:56.774041 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 3 23:37:56.774049 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:37:56.774055 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 3 23:37:56.774062 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 3 23:37:56.774069 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:37:56.774075 kernel: psci: PSCIv1.1 detected in firmware.
Sep 3 23:37:56.774082 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:37:56.774088 kernel: psci: Trusted OS migration not required
Sep 3 23:37:56.774095 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:37:56.774101 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 3 23:37:56.774108 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:37:56.774116 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:37:56.774123 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 3 23:37:56.774129 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:37:56.774135 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:37:56.774142 kernel: CPU features: detected: Spectre-v4
Sep 3 23:37:56.774148 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:37:56.774154 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 3 23:37:56.774161 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 3 23:37:56.774167 kernel: CPU features: detected: ARM erratum 1418040
Sep 3 23:37:56.774173 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 3 23:37:56.774180 kernel: alternatives: applying boot alternatives
Sep 3 23:37:56.774187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:37:56.774195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:37:56.774202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:37:56.774208 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:37:56.774214 kernel: Fallback order for Node 0: 0
Sep 3 23:37:56.774221 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 3 23:37:56.774227 kernel: Policy zone: DMA
Sep 3 23:37:56.774233 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:37:56.774240 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 3 23:37:56.774246 kernel: software IO TLB: area num 4.
Sep 3 23:37:56.774252 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 3 23:37:56.774259 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 3 23:37:56.774266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 3 23:37:56.774273 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:37:56.774280 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:37:56.774286 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 3 23:37:56.774293 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:37:56.774300 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:37:56.774306 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:37:56.774313 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 3 23:37:56.774319 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:37:56.774326 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:37:56.774332 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:37:56.774340 kernel: GICv3: 256 SPIs implemented
Sep 3 23:37:56.774346 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:37:56.774353 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:37:56.774359 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 3 23:37:56.774365 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:37:56.774372 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 3 23:37:56.774378 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 3 23:37:56.774385 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:37:56.774391 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:37:56.774397 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 3 23:37:56.774404 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 3 23:37:56.774410 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:37:56.774418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:37:56.774424 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 3 23:37:56.774431 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 3 23:37:56.774437 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 3 23:37:56.774444 kernel: arm-pv: using stolen time PV
Sep 3 23:37:56.774450 kernel: Console: colour dummy device 80x25
Sep 3 23:37:56.774457 kernel: ACPI: Core revision 20240827
Sep 3 23:37:56.774464 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 3 23:37:56.774470 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:37:56.774477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:37:56.774484 kernel: landlock: Up and running.
Sep 3 23:37:56.774491 kernel: SELinux: Initializing.
Sep 3 23:37:56.774497 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:37:56.774504 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:37:56.774511 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:37:56.774517 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:37:56.774524 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:37:56.774530 kernel: Remapping and enabling EFI services.
Sep 3 23:37:56.774537 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:37:56.774549 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:37:56.774567 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 3 23:37:56.774574 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 3 23:37:56.774584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:37:56.774590 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 3 23:37:56.774597 kernel: Detected PIPT I-cache on CPU2
Sep 3 23:37:56.774604 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 3 23:37:56.774611 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 3 23:37:56.774619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:37:56.774626 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 3 23:37:56.774633 kernel: Detected PIPT I-cache on CPU3
Sep 3 23:37:56.774640 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 3 23:37:56.774647 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 3 23:37:56.774654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:37:56.774661 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 3 23:37:56.774667 kernel: smp: Brought up 1 node, 4 CPUs
Sep 3 23:37:56.774674 kernel: SMP: Total of 4 processors activated.
Sep 3 23:37:56.774682 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:37:56.774689 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:37:56.774696 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 3 23:37:56.774703 kernel: CPU features: detected: Common not Private translations
Sep 3 23:37:56.774710 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:37:56.774724 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 3 23:37:56.774732 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 3 23:37:56.774739 kernel: CPU features: detected: LSE atomic instructions
Sep 3 23:37:56.774746 kernel: CPU features: detected: Privileged Access Never
Sep 3 23:37:56.774755 kernel: CPU features: detected: RAS Extension Support
Sep 3 23:37:56.774762 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 3 23:37:56.774769 kernel: alternatives: applying system-wide alternatives
Sep 3 23:37:56.774776 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 3 23:37:56.774783 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 3 23:37:56.774790 kernel: devtmpfs: initialized
Sep 3 23:37:56.774797 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:37:56.774804 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 3 23:37:56.774811 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 3 23:37:56.774819 kernel: 0 pages in range for non-PLT usage
Sep 3 23:37:56.774826 kernel: 508560 pages in range for PLT usage
Sep 3 23:37:56.774833 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:37:56.774840 kernel: SMBIOS 3.0.0 present.
Sep 3 23:37:56.774847 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 3 23:37:56.774854 kernel: DMI: Memory slots populated: 1/1
Sep 3 23:37:56.774861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:37:56.774868 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:37:56.774874 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:37:56.774883 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:37:56.774890 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:37:56.774897 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 3 23:37:56.774903 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:37:56.774910 kernel: cpuidle: using governor menu
Sep 3 23:37:56.774917 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:37:56.774924 kernel: ASID allocator initialised with 32768 entries
Sep 3 23:37:56.774931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:37:56.774938 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:37:56.774946 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:37:56.774952 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:37:56.774959 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:37:56.774966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:37:56.774973 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:37:56.774980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:37:56.774987 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:37:56.774994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:37:56.775001 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:37:56.775009 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:37:56.775015 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:37:56.775022 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:37:56.775029 kernel: ACPI: Interpreter enabled
Sep 3 23:37:56.775036 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:37:56.775043 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:37:56.775050 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:37:56.775056 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:37:56.775063 kernel: ACPI: CPU2 has been hot-added
Sep 3 23:37:56.775070 kernel: ACPI: CPU3 has been hot-added
Sep 3 23:37:56.775078 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 3 23:37:56.775085 kernel: printk: legacy console [ttyAMA0] enabled
Sep 3 23:37:56.775092 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 3 23:37:56.775230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:37:56.775295 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:37:56.775354 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:37:56.775411 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 3 23:37:56.775472 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 3 23:37:56.775481 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 3 23:37:56.775488 kernel: PCI host bridge to bus 0000:00
Sep 3 23:37:56.775560 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 3 23:37:56.775623 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:37:56.775676 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 3 23:37:56.775753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 3 23:37:56.775840 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:37:56.775912 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 3 23:37:56.775974 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 3 23:37:56.776034 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 3 23:37:56.776093 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 3 23:37:56.776152 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 3 23:37:56.776210 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 3 23:37:56.776272 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 3 23:37:56.776325 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 3 23:37:56.776376 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:37:56.776430 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 3 23:37:56.776439 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:37:56.776446 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:37:56.776453 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:37:56.776462 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:37:56.776469 kernel: iommu: Default domain type: Translated
Sep 3 23:37:56.776475 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:37:56.776482 kernel: efivars: Registered efivars operations
Sep 3 23:37:56.776489 kernel: vgaarb: loaded
Sep 3 23:37:56.776496 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:37:56.776503 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:37:56.776510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:37:56.776517 kernel: pnp: PnP ACPI init
Sep 3 23:37:56.776598 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 3 23:37:56.776609 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:37:56.776616 kernel: NET: Registered PF_INET protocol family
Sep 3 23:37:56.776623 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:37:56.776630 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:37:56.776637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:37:56.776644 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:37:56.776651 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:37:56.776660 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:37:56.776667 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:37:56.776674 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:37:56.776681 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:37:56.776688 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:37:56.776695 kernel: kvm [1]: HYP mode not available
Sep 3 23:37:56.776702 kernel: Initialise system trusted keyrings
Sep 3 23:37:56.776709 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:37:56.776726 kernel: Key type asymmetric registered
Sep 3 23:37:56.776735 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:37:56.776742 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:37:56.776749 kernel: io scheduler mq-deadline registered
Sep 3 23:37:56.776755 kernel: io scheduler kyber registered
Sep 3 23:37:56.776762 kernel: io scheduler bfq registered
Sep 3 23:37:56.776769 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:37:56.776776 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:37:56.776784 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:37:56.776849 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 3 23:37:56.776860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:37:56.776867 kernel: thunder_xcv, ver 1.0
Sep 3 23:37:56.776874 kernel: thunder_bgx, ver 1.0
Sep 3 23:37:56.776881 kernel: nicpf, ver 1.0
Sep 3 23:37:56.776888 kernel: nicvf, ver 1.0
Sep 3 23:37:56.776954 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:37:56.777010 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:37:56 UTC (1756942676)
Sep 3 23:37:56.777019 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:37:56.777028 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 3 23:37:56.777035 kernel: watchdog: NMI not fully supported
Sep 3 23:37:56.777042 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:37:56.777048 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:37:56.777055 kernel: Segment Routing with IPv6
Sep 3 23:37:56.777062 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:37:56.777069 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:37:56.777076 kernel: Key type dns_resolver registered
Sep 3 23:37:56.777083 kernel: registered taskstats version 1
Sep 3 23:37:56.777090 kernel: Loading compiled-in X.509 certificates
Sep 3 23:37:56.777098 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:37:56.777105 kernel: Demotion targets for Node 0: null
Sep 3 23:37:56.777112 kernel: Key type .fscrypt registered
Sep 3 23:37:56.777118 kernel: Key type fscrypt-provisioning registered
Sep 3 23:37:56.777125 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:37:56.777132 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:37:56.777139 kernel: ima: No architecture policies found
Sep 3 23:37:56.777146 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:37:56.777155 kernel: clk: Disabling unused clocks
Sep 3 23:37:56.777161 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:37:56.777168 kernel: Warning: unable to open an initial console.
Sep 3 23:37:56.777175 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:37:56.777182 kernel: Run /init as init process
Sep 3 23:37:56.777189 kernel: with arguments:
Sep 3 23:37:56.777196 kernel: /init
Sep 3 23:37:56.777202 kernel: with environment:
Sep 3 23:37:56.777209 kernel: HOME=/
Sep 3 23:37:56.777217 kernel: TERM=linux
Sep 3 23:37:56.777224 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:37:56.777231 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:37:56.777242 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:37:56.777250 systemd[1]: Detected virtualization kvm.
Sep 3 23:37:56.777257 systemd[1]: Detected architecture arm64.
Sep 3 23:37:56.777264 systemd[1]: Running in initrd.
Sep 3 23:37:56.777272 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:37:56.777281 systemd[1]: Hostname set to .
Sep 3 23:37:56.777288 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:37:56.777296 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:37:56.777303 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:37:56.777311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:37:56.777319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:37:56.777327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:37:56.777334 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:37:56.777344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:37:56.777353 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:37:56.777360 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:37:56.777368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:37:56.777376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:37:56.777383 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:37:56.777391 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:37:56.777400 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:37:56.777407 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:37:56.777415 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:37:56.777422 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:37:56.777430 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:37:56.777437 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:37:56.777445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:37:56.777453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:37:56.777462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:37:56.777469 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:37:56.777477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:37:56.777484 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:37:56.777492 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:37:56.777500 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:37:56.777507 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:37:56.777515 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:37:56.777522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:37:56.777531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:37:56.777538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:37:56.777546 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:37:56.777562 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:37:56.777572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:37:56.777596 systemd-journald[244]: Collecting audit messages is disabled.
Sep 3 23:37:56.777615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:37:56.777623 systemd-journald[244]: Journal started
Sep 3 23:37:56.777642 systemd-journald[244]: Runtime Journal (/run/log/journal/f1196a8ebaf04c10814ec75d80a864e5) is 6M, max 48.5M, 42.4M free.
Sep 3 23:37:56.769089 systemd-modules-load[245]: Inserted module 'overlay'
Sep 3 23:37:56.779999 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:37:56.782735 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:37:56.782453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:37:56.787753 kernel: Bridge firewalling registered
Sep 3 23:37:56.784031 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 3 23:37:56.784519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:37:56.795844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:37:56.796897 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:37:56.800281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:37:56.801655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:37:56.802872 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:37:56.807260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:37:56.812424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:37:56.813760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:37:56.817304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:37:56.819485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:37:56.821580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:37:56.840901 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:37:56.854355 systemd-resolved[291]: Positive Trust Anchors:
Sep 3 23:37:56.854372 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:37:56.854409 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:37:56.859184 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 3 23:37:56.860365 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:37:56.862781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:37:56.912734 kernel: SCSI subsystem initialized
Sep 3 23:37:56.915771 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:37:56.923755 kernel: iscsi: registered transport (tcp)
Sep 3 23:37:56.935738 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:37:56.935762 kernel: QLogic iSCSI HBA Driver
Sep 3 23:37:56.951755 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:37:56.978773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:37:56.980029 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:37:57.024605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:37:57.026675 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:37:57.084754 kernel: raid6: neonx8 gen() 15782 MB/s
Sep 3 23:37:57.101729 kernel: raid6: neonx4 gen() 15810 MB/s
Sep 3 23:37:57.118762 kernel: raid6: neonx2 gen() 13209 MB/s
Sep 3 23:37:57.136594 kernel: raid6: neonx1 gen() 10425 MB/s
Sep 3 23:37:57.152759 kernel: raid6: int64x8 gen() 6896 MB/s
Sep 3 23:37:57.169756 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 3 23:37:57.186754 kernel: raid6: int64x2 gen() 6105 MB/s
Sep 3 23:37:57.203759 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 3 23:37:57.203795 kernel: raid6: using algorithm neonx4 gen() 15810 MB/s
Sep 3 23:37:57.220759 kernel: raid6: .... xor() 12265 MB/s, rmw enabled
Sep 3 23:37:57.220814 kernel: raid6: using neon recovery algorithm
Sep 3 23:37:57.226591 kernel: xor: measuring software checksum speed
Sep 3 23:37:57.226635 kernel: 8regs : 21641 MB/sec
Sep 3 23:37:57.226645 kernel: 32regs : 21687 MB/sec
Sep 3 23:37:57.226653 kernel: arm64_neon : 28138 MB/sec
Sep 3 23:37:57.226823 kernel: xor: using function: arm64_neon (28138 MB/sec)
Sep 3 23:37:57.279920 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:37:57.287200 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:37:57.289426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:37:57.331453 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 3 23:37:57.335612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:37:57.337408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:37:57.361929 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 3 23:37:57.385516 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:37:57.387598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:37:57.441960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:37:57.445487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:37:57.490903 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 3 23:37:57.491057 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 3 23:37:57.502353 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:37:57.502394 kernel: GPT:9289727 != 19775487
Sep 3 23:37:57.502405 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:37:57.502886 kernel: GPT:9289727 != 19775487
Sep 3 23:37:57.504875 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:37:57.504899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:37:57.506648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:37:57.506813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:37:57.509683 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:37:57.511900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:37:57.548845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:37:57.556446 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 3 23:37:57.557842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:37:57.566916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 3 23:37:57.573135 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 3 23:37:57.574116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 3 23:37:57.582831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:37:57.583800 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:37:57.585435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:37:57.587100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:37:57.589440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:37:57.591094 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:37:57.614514 disk-uuid[593]: Primary Header is updated.
Sep 3 23:37:57.614514 disk-uuid[593]: Secondary Entries is updated.
Sep 3 23:37:57.614514 disk-uuid[593]: Secondary Header is updated.
Sep 3 23:37:57.617316 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:37:57.619787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:37:57.622731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:37:58.624752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:37:58.625814 disk-uuid[597]: The operation has completed successfully.
Sep 3 23:37:58.649690 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:37:58.649808 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:37:58.673530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:37:58.686738 sh[615]: Success
Sep 3 23:37:58.698216 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:37:58.698260 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:37:58.699367 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:37:58.706742 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:37:58.731515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:37:58.734355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:37:58.756961 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:37:58.761740 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (627)
Sep 3 23:37:58.763440 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:37:58.763454 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:37:58.767286 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:37:58.767308 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:37:58.768265 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:37:58.769346 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:37:58.770468 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:37:58.771221 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:37:58.772624 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:37:58.800592 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Sep 3 23:37:58.800629 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:37:58.800639 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:37:58.803832 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:37:58.803868 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:37:58.808732 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:37:58.809810 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:37:58.811510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:37:58.876228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:37:58.878771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:37:58.914944 ignition[703]: Ignition 2.21.0
Sep 3 23:37:58.914960 ignition[703]: Stage: fetch-offline
Sep 3 23:37:58.915005 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:37:58.915013 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:37:58.915178 ignition[703]: parsed url from cmdline: ""
Sep 3 23:37:58.915181 ignition[703]: no config URL provided
Sep 3 23:37:58.915185 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:37:58.915192 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:37:58.915211 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 3 23:37:58.915218 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 3 23:37:58.921760 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 3 23:37:58.926585 systemd-networkd[806]: lo: Link UP
Sep 3 23:37:58.926597 systemd-networkd[806]: lo: Gained carrier
Sep 3 23:37:58.927330 systemd-networkd[806]: Enumeration completed
Sep 3 23:37:58.927484 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:37:58.928968 systemd[1]: Reached target network.target - Network.
Sep 3 23:37:58.930662 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:37:58.930666 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:37:58.931163 systemd-networkd[806]: eth0: Link UP
Sep 3 23:37:58.931456 systemd-networkd[806]: eth0: Gained carrier
Sep 3 23:37:58.931465 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:37:58.948789 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:37:58.974199 ignition[703]: parsing config with SHA512: 38d52efc5ac20724f1e0e7f5efd4a2407640cce9668e138e5eea4c2de100bbb191d8888a1e6ef965bd83a68c27d193ee33dcf1473196039fa9a44bd612723eeb
Sep 3 23:37:58.978536 unknown[703]: fetched base config from "system"
Sep 3 23:37:58.978555 unknown[703]: fetched user config from "qemu"
Sep 3 23:37:58.979088 ignition[703]: fetch-offline: fetch-offline passed
Sep 3 23:37:58.980828 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:37:58.979148 ignition[703]: Ignition finished successfully
Sep 3 23:37:58.982176 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 3 23:37:58.982932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:37:59.014605 ignition[814]: Ignition 2.21.0
Sep 3 23:37:59.014624 ignition[814]: Stage: kargs
Sep 3 23:37:59.014796 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:37:59.014806 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:37:59.016100 ignition[814]: kargs: kargs passed
Sep 3 23:37:59.017914 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:37:59.016151 ignition[814]: Ignition finished successfully
Sep 3 23:37:59.021124 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:37:59.060592 ignition[822]: Ignition 2.21.0
Sep 3 23:37:59.060609 ignition[822]: Stage: disks
Sep 3 23:37:59.060756 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:37:59.060765 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:37:59.062101 ignition[822]: disks: disks passed
Sep 3 23:37:59.063891 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:37:59.062170 ignition[822]: Ignition finished successfully
Sep 3 23:37:59.066999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:37:59.068555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:37:59.070193 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:37:59.071836 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:37:59.073598 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:37:59.076070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:37:59.105308 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:37:59.109802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:37:59.113586 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:37:59.184746 kernel: EXT4-fs (vda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:37:59.184925 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:37:59.186119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:37:59.188390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:37:59.189942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:37:59.190812 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:37:59.190852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:37:59.190887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:37:59.202981 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:37:59.205425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:37:59.210021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Sep 3 23:37:59.210043 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:37:59.210060 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:37:59.212737 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:37:59.212762 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:37:59.213813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:37:59.239958 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:37:59.243020 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:37:59.246497 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:37:59.250147 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:37:59.312640 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:37:59.315805 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:37:59.318113 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:37:59.332757 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:37:59.347861 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:37:59.360528 ignition[956]: INFO : Ignition 2.21.0
Sep 3 23:37:59.360528 ignition[956]: INFO : Stage: mount
Sep 3 23:37:59.361847 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:37:59.361847 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:37:59.363651 ignition[956]: INFO : mount: mount passed
Sep 3 23:37:59.363651 ignition[956]: INFO : Ignition finished successfully
Sep 3 23:37:59.365204 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:37:59.366947 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:37:59.769149 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:37:59.770678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:37:59.789958 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 3 23:37:59.789990 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:37:59.790001 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:37:59.792872 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:37:59.792897 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:37:59.794172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:37:59.820636 ignition[985]: INFO : Ignition 2.21.0
Sep 3 23:37:59.820636 ignition[985]: INFO : Stage: files
Sep 3 23:37:59.822020 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:37:59.822020 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:37:59.822020 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:37:59.824854 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:37:59.824854 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:37:59.827168 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:37:59.827168 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:37:59.827168 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:37:59.826702 unknown[985]: wrote ssh authorized keys file for user: core
Sep 3 23:37:59.831918 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:37:59.831918 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 3 23:37:59.927623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:37:59.981553 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:37:59.983305 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:37:59.983305 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 3 23:38:00.178671 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 3 23:38:00.298194 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:38:00.299785 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:38:00.310981 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 3 23:38:00.575860 systemd-networkd[806]: eth0: Gained IPv6LL
Sep 3 23:38:00.664207 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 3 23:38:01.015577 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:38:01.015577 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 3 23:38:01.018724 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:38:01.022236 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:38:01.022236 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 3 23:38:01.022236 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 3 23:38:01.026195 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:38:01.026195 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:38:01.026195 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 3 23:38:01.026195 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:38:01.035508 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:38:01.039017 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:38:01.041330 ignition[985]: INFO : files: files passed
Sep 3 23:38:01.041330 ignition[985]: INFO : Ignition finished successfully
Sep 3 23:38:01.041936 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:38:01.045876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:38:01.063047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:38:01.068779 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:38:01.069638 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:38:01.073148 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 3 23:38:01.076378 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:38:01.076378 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:38:01.079254 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:38:01.079910 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:38:01.081607 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:38:01.084888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:38:01.122148 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:38:01.122946 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:38:01.124132 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:38:01.125629 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:38:01.126505 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:38:01.127260 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:38:01.150066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:38:01.152670 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:38:01.171592 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:38:01.173740 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:38:01.174703 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:38:01.176213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:38:01.176364 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:38:01.178374 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:38:01.179980 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:38:01.181315 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:38:01.182618 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:38:01.184199 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:38:01.185665 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:38:01.187211 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:38:01.188650 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:38:01.190181 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:38:01.191666 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:38:01.193058 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:38:01.194293 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:38:01.194417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:38:01.196344 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:38:01.197839 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:38:01.199320 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:38:01.202815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:38:01.203785 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:38:01.203902 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:38:01.206243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:38:01.206365 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:38:01.207861 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:38:01.209139 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:38:01.212790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:38:01.213788 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:38:01.215456 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:38:01.216659 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:38:01.216763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:38:01.218023 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:38:01.218089 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:38:01.219416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:38:01.219531 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:38:01.220847 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:38:01.220945 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:38:01.223055 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:38:01.224251 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:38:01.224375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:38:01.226835 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:38:01.228098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:38:01.228228 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:38:01.229755 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:38:01.229854 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:38:01.235021 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:38:01.235099 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:38:01.240961 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:38:01.245875 ignition[1040]: INFO : Ignition 2.21.0
Sep 3 23:38:01.245875 ignition[1040]: INFO : Stage: umount
Sep 3 23:38:01.249098 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:38:01.249098 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:38:01.249098 ignition[1040]: INFO : umount: umount passed
Sep 3 23:38:01.249098 ignition[1040]: INFO : Ignition finished successfully
Sep 3 23:38:01.250070 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:38:01.250165 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:38:01.251604 systemd[1]: Stopped target network.target - Network.
Sep 3 23:38:01.252867 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:38:01.252928 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:38:01.254208 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:38:01.254246 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:38:01.255460 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:38:01.255503 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:38:01.256979 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:38:01.257015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:38:01.258451 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:38:01.259758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:38:01.268375 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:38:01.269146 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:38:01.272259 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:38:01.272463 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:38:01.272573 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:38:01.275384 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:38:01.275970 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:38:01.277425 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:38:01.277464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:38:01.279758 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:38:01.281116 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:38:01.281171 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:38:01.282636 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:38:01.282677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:38:01.285043 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:38:01.285087 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:38:01.286641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:38:01.286684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:38:01.289124 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:38:01.291816 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:38:01.291876 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:38:01.300906 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:38:01.301024 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:38:01.308340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:38:01.308488 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:38:01.312506 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:38:01.312558 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:38:01.313967 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:38:01.313995 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:38:01.315327 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:38:01.315368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:38:01.317509 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:38:01.317568 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:38:01.319752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:38:01.319795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:38:01.322589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:38:01.324133 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:38:01.324184 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:38:01.326657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:38:01.326694 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:38:01.329458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:38:01.329525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:38:01.332932 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 3 23:38:01.332980 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 3 23:38:01.333009 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:38:01.333248 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:38:01.333332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:38:01.334604 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:38:01.334681 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:38:01.337046 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:38:01.337131 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:38:01.339087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:38:01.341544 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:38:01.356608 systemd[1]: Switching root.
Sep 3 23:38:01.398673 systemd-journald[244]: Journal stopped
Sep 3 23:38:02.142297 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:38:02.142351 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:38:02.142367 kernel: SELinux: policy capability open_perms=1
Sep 3 23:38:02.142376 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:38:02.142385 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:38:02.142399 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:38:02.142408 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:38:02.142417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:38:02.142426 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:38:02.142437 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:38:02.142446 kernel: audit: type=1403 audit(1756942681.577:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:38:02.142457 systemd[1]: Successfully loaded SELinux policy in 48.443ms.
Sep 3 23:38:02.142472 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.622ms.
Sep 3 23:38:02.142484 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:38:02.142495 systemd[1]: Detected virtualization kvm.
Sep 3 23:38:02.142505 systemd[1]: Detected architecture arm64.
Sep 3 23:38:02.142514 systemd[1]: Detected first boot.
Sep 3 23:38:02.142525 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:38:02.142549 zram_generator::config[1086]: No configuration found.
Sep 3 23:38:02.142562 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:38:02.142572 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:38:02.142583 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:38:02.142593 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:38:02.142603 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:38:02.142613 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:38:02.142622 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:38:02.142635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:38:02.142645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:38:02.142655 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:38:02.142664 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:38:02.142674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:38:02.142684 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:38:02.142695 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:38:02.142705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:38:02.142741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:38:02.142753 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:38:02.142763 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:38:02.142773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:38:02.142784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:38:02.142794 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 3 23:38:02.142804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:38:02.142826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:38:02.142838 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:38:02.142848 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:38:02.142858 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:38:02.142868 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:38:02.142878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:38:02.142888 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:38:02.142898 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:38:02.142908 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:38:02.142919 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:38:02.142930 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:38:02.142940 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:38:02.142950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:38:02.142960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:38:02.142970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:38:02.142981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:38:02.142994 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:38:02.143004 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:38:02.143014 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:38:02.143025 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:38:02.143034 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:38:02.143044 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:38:02.143054 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:38:02.143064 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:38:02.143074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:38:02.143084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:38:02.143094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:38:02.143104 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:38:02.143115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:38:02.143124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:38:02.143135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:38:02.143144 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:38:02.143154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:38:02.143164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:38:02.143173 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:38:02.143183 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:38:02.143194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:38:02.143204 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:38:02.143213 kernel: loop: module loaded
Sep 3 23:38:02.143223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:38:02.143232 kernel: fuse: init (API version 7.41)
Sep 3 23:38:02.143241 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:38:02.143252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:38:02.143262 kernel: ACPI: bus type drm_connector registered
Sep 3 23:38:02.143272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:38:02.143283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:38:02.143292 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:38:02.143323 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 3 23:38:02.143348 systemd-journald[1151]: Journal started
Sep 3 23:38:02.143368 systemd-journald[1151]: Runtime Journal (/run/log/journal/f1196a8ebaf04c10814ec75d80a864e5) is 6M, max 48.5M, 42.4M free.
Sep 3 23:38:01.946279 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:38:01.968734 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 3 23:38:01.969136 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:38:02.150238 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:38:02.153070 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:38:02.153118 systemd[1]: Stopped verity-setup.service.
Sep 3 23:38:02.158295 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:38:02.158991 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:38:02.160057 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:38:02.161150 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:38:02.162151 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:38:02.163199 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:38:02.164309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:38:02.165493 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:38:02.167809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:38:02.168980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:38:02.169142 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:38:02.170286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:38:02.170450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:38:02.171590 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:38:02.171753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:38:02.172786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:38:02.172944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:38:02.174060 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:38:02.174216 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:38:02.175308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:38:02.175463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:38:02.176795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:38:02.177918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:38:02.179217 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:38:02.180585 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:38:02.191683 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:38:02.193868 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:38:02.195632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:38:02.196669 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:38:02.196705 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:38:02.198466 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:38:02.203408 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:38:02.205118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:38:02.206434 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:38:02.208322 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:38:02.209400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:38:02.210249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:38:02.211919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:38:02.216809 systemd-journald[1151]: Time spent on flushing to /var/log/journal/f1196a8ebaf04c10814ec75d80a864e5 is 20.820ms for 890 entries.
Sep 3 23:38:02.216809 systemd-journald[1151]: System Journal (/var/log/journal/f1196a8ebaf04c10814ec75d80a864e5) is 8M, max 195.6M, 187.6M free.
Sep 3 23:38:02.249518 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 3 23:38:02.249576 kernel: loop0: detected capacity change from 0 to 138376
Sep 3 23:38:02.213981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:38:02.215851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:38:02.218249 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:38:02.229832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:38:02.231105 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:38:02.233958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:38:02.247762 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:38:02.250096 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:38:02.252855 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:38:02.255259 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:38:02.258741 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:38:02.263762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:38:02.265222 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:38:02.268521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:38:02.279761 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:38:02.285747 kernel: loop1: detected capacity change from 0 to 107312
Sep 3 23:38:02.295676 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Sep 3 23:38:02.295695 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Sep 3 23:38:02.300794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:38:02.305781 kernel: loop2: detected capacity change from 0 to 203944
Sep 3 23:38:02.326774 kernel: loop3: detected capacity change from 0 to 138376
Sep 3 23:38:02.330754 kernel: loop4: detected capacity change from 0 to 107312
Sep 3 23:38:02.335747 kernel: loop5: detected capacity change from 0 to 203944
Sep 3 23:38:02.340091 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 3 23:38:02.340460 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 3 23:38:02.344814 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:38:02.344834 systemd[1]: Reloading...
Sep 3 23:38:02.405745 zram_generator::config[1253]: No configuration found.
Sep 3 23:38:02.480639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:38:02.481364 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:38:02.550612 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:38:02.550857 systemd[1]: Reloading finished in 205 ms.
Sep 3 23:38:02.578172 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:38:02.580728 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:38:02.591014 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:38:02.592609 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:38:02.601693 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:38:02.601707 systemd[1]: Reloading...
Sep 3 23:38:02.609925 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:38:02.609957 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:38:02.610210 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:38:02.610432 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:38:02.611742 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:38:02.612001 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 3 23:38:02.612051 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 3 23:38:02.614607 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:38:02.614620 systemd-tmpfiles[1285]: Skipping /boot
Sep 3 23:38:02.628224 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:38:02.628241 systemd-tmpfiles[1285]: Skipping /boot
Sep 3 23:38:02.647741 zram_generator::config[1312]: No configuration found.
Sep 3 23:38:02.719823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:38:02.789670 systemd[1]: Reloading finished in 187 ms.
Sep 3 23:38:02.811562 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:38:02.818082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:38:02.828836 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:38:02.831016 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:38:02.840030 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:38:02.843858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:38:02.847869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:38:02.852842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:38:02.857684 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:38:02.858900 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:38:02.864455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:38:02.868482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:38:02.872029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:38:02.874485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:38:02.875777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:38:02.875897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:38:02.878161 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:38:02.880061 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:38:02.885711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:38:02.887572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:38:02.887827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:38:02.887921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:38:02.887997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:38:02.891502 augenrules[1381]: No rules
Sep 3 23:38:02.893525 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Sep 3 23:38:02.895757 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:38:02.898609 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:38:02.899925 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:38:02.901366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:38:02.902747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:38:02.904186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:38:02.904430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:38:02.906048 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:38:02.906279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:38:02.907834 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:38:02.909119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:38:02.920412 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:38:02.929953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:38:02.931873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:38:02.940245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:38:02.953936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:38:02.957094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:38:02.960182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:38:02.961961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:38:02.962003 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:38:02.964996 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:38:02.966400 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:38:02.968746 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:38:02.981250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:38:02.982264 augenrules[1422]: /sbin/augenrules: No change
Sep 3 23:38:02.991802 augenrules[1452]: No rules
Sep 3 23:38:02.991150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:38:02.995037 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:38:02.995224 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:38:02.996916 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:38:02.997080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:38:02.998339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:38:02.998481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:38:03.001130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:38:03.001278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:38:03.010657 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 3 23:38:03.019247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:38:03.028911 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:38:03.029905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:38:03.029969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:38:03.032088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 3 23:38:03.057802 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:38:03.092013 systemd-networkd[1434]: lo: Link UP
Sep 3 23:38:03.092020 systemd-networkd[1434]: lo: Gained carrier
Sep 3 23:38:03.092782 systemd-networkd[1434]: Enumeration completed
Sep 3 23:38:03.092891 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:38:03.093172 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:38:03.093181 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:38:03.093656 systemd-networkd[1434]: eth0: Link UP
Sep 3 23:38:03.093783 systemd-networkd[1434]: eth0: Gained carrier
Sep 3 23:38:03.093802 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:38:03.097027 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:38:03.099909 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:38:03.106893 systemd-resolved[1352]: Positive Trust Anchors:
Sep 3 23:38:03.106910 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:38:03.106941 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:38:03.115760 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:38:03.116211 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Sep 3 23:38:03.117879 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:38:03.118833 systemd[1]: Reached target network.target - Network.
Sep 3 23:38:03.119521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:38:03.124352 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:38:03.136359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:38:03.142161 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 3 23:38:03.143369 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 3 23:38:03.143417 systemd-timesyncd[1465]: Initial clock synchronization to Wed 2025-09-03 23:38:03.514198 UTC.
Sep 3 23:38:03.143523 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:38:03.185772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:38:03.186917 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:38:03.187819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:38:03.188763 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:38:03.189867 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:38:03.190777 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 3 23:38:03.191731 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 3 23:38:03.192630 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 3 23:38:03.192662 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:38:03.193569 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:38:03.195316 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 3 23:38:03.197381 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 3 23:38:03.200384 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 3 23:38:03.201608 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 3 23:38:03.202691 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 3 23:38:03.205580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 3 23:38:03.206859 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 3 23:38:03.208262 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 3 23:38:03.209221 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:38:03.210008 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:38:03.210756 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:38:03.210787 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:38:03.211632 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 3 23:38:03.213411 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 3 23:38:03.215162 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 3 23:38:03.217108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 3 23:38:03.219077 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 3 23:38:03.219896 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 3 23:38:03.220826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 3 23:38:03.223461 jq[1499]: false
Sep 3 23:38:03.223862 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 3 23:38:03.225428 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 3 23:38:03.227202 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 3 23:38:03.230812 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 3 23:38:03.232830 extend-filesystems[1500]: Found /dev/vda6
Sep 3 23:38:03.232421 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 3 23:38:03.232818 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 3 23:38:03.233336 systemd[1]: Starting update-engine.service - Update Engine...
Sep 3 23:38:03.235161 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 3 23:38:03.240489 extend-filesystems[1500]: Found /dev/vda9
Sep 3 23:38:03.240120 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 3 23:38:03.241355 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 3 23:38:03.241514 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 3 23:38:03.243008 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 3 23:38:03.243182 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 3 23:38:03.244049 extend-filesystems[1500]: Checking size of /dev/vda9
Sep 3 23:38:03.247483 systemd[1]: motdgen.service: Deactivated successfully.
Sep 3 23:38:03.247700 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 3 23:38:03.251789 jq[1513]: true
Sep 3 23:38:03.259655 extend-filesystems[1500]: Resized partition /dev/vda9
Sep 3 23:38:03.261856 jq[1531]: true
Sep 3 23:38:03.270082 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025)
Sep 3 23:38:03.276036 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 3 23:38:03.276988 update_engine[1512]: I20250903 23:38:03.276816 1512 main.cc:92] Flatcar Update Engine starting
Sep 3 23:38:03.278261 tar[1519]: linux-arm64/helm
Sep 3 23:38:03.281420 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 3 23:38:03.292110 dbus-daemon[1497]: [system] SELinux support is enabled
Sep 3 23:38:03.292413 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 3 23:38:03.300784 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 3 23:38:03.300817 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 3 23:38:03.300940 update_engine[1512]: I20250903 23:38:03.300848 1512 update_check_scheduler.cc:74] Next update check in 5m51s
Sep 3 23:38:03.302306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 3 23:38:03.302330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 3 23:38:03.304028 systemd[1]: Started update-engine.service - Update Engine.
Sep 3 23:38:03.304728 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 3 23:38:03.306309 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 3 23:38:03.320400 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 3 23:38:03.321776 systemd-logind[1510]: New seat seat0.
Sep 3 23:38:03.322640 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 3 23:38:03.322640 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 3 23:38:03.322640 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 3 23:38:03.330367 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Sep 3 23:38:03.325374 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 3 23:38:03.333646 bash[1558]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:38:03.330161 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 3 23:38:03.332600 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 3 23:38:03.336958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 3 23:38:03.339141 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 3 23:38:03.378860 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 3 23:38:03.459273 containerd[1534]: time="2025-09-03T23:38:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 3 23:38:03.460133 containerd[1534]: time="2025-09-03T23:38:03.459916720Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 3 23:38:03.468953 containerd[1534]: time="2025-09-03T23:38:03.468905640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.2µs"
Sep 3 23:38:03.468953 containerd[1534]: time="2025-09-03T23:38:03.468944360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 3 23:38:03.469062 containerd[1534]: time="2025-09-03T23:38:03.468962280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 3 23:38:03.469364 containerd[1534]: time="2025-09-03T23:38:03.469105120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 3 23:38:03.469364 containerd[1534]: time="2025-09-03T23:38:03.469126200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 3 23:38:03.469364 containerd[1534]: time="2025-09-03T23:38:03.469148720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469364 containerd[1534]: time="2025-09-03T23:38:03.469194840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469364 containerd[1534]: time="2025-09-03T23:38:03.469206240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469460 containerd[1534]: time="2025-09-03T23:38:03.469393160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469460 containerd[1534]: time="2025-09-03T23:38:03.469408240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469460 containerd[1534]: time="2025-09-03T23:38:03.469418480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469460 containerd[1534]: time="2025-09-03T23:38:03.469426400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469526 containerd[1534]: time="2025-09-03T23:38:03.469492000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469736 containerd[1534]: time="2025-09-03T23:38:03.469688840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469768 containerd[1534]: time="2025-09-03T23:38:03.469750640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:38:03.469768 containerd[1534]: time="2025-09-03T23:38:03.469762440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 3 23:38:03.469812 containerd[1534]: time="2025-09-03T23:38:03.469794240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 3 23:38:03.470277 containerd[1534]: time="2025-09-03T23:38:03.470071040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 3 23:38:03.470277 containerd[1534]: time="2025-09-03T23:38:03.470139880Z" level=info msg="metadata content store policy set" policy=shared
Sep 3 23:38:03.473276 containerd[1534]: time="2025-09-03T23:38:03.473242120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 3 23:38:03.473340 containerd[1534]: time="2025-09-03T23:38:03.473303760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 3 23:38:03.473340 containerd[1534]: time="2025-09-03T23:38:03.473318440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 3 23:38:03.473340 containerd[1534]: time="2025-09-03T23:38:03.473329400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 3 23:38:03.473389 containerd[1534]: time="2025-09-03T23:38:03.473340760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 3 23:38:03.473389 containerd[1534]: time="2025-09-03T23:38:03.473352240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 3 23:38:03.473389 containerd[1534]: time="2025-09-03T23:38:03.473362920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 3 23:38:03.473389 containerd[1534]: time="2025-09-03T23:38:03.473374400Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 3 23:38:03.473466 containerd[1534]: time="2025-09-03T23:38:03.473386760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 3 23:38:03.473466 containerd[1534]: time="2025-09-03T23:38:03.473397000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 3 23:38:03.473466 containerd[1534]: time="2025-09-03T23:38:03.473406440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 3 23:38:03.473466 containerd[1534]: time="2025-09-03T23:38:03.473418000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473525600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473564160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473581040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473591880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473602120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473611680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473622360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473631760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473641880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473651760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 3 23:38:03.473870 containerd[1534]: time="2025-09-03T23:38:03.473661520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 3 23:38:03.474099 containerd[1534]: time="2025-09-03T23:38:03.473876000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 3 23:38:03.474099 containerd[1534]: time="2025-09-03T23:38:03.473894360Z" level=info msg="Start snapshots syncer"
Sep 3 23:38:03.474099 containerd[1534]: time="2025-09-03T23:38:03.473923440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 3 23:38:03.474152 containerd[1534]: time="2025-09-03T23:38:03.474110920Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 3 23:38:03.474225 containerd[1534]: time="2025-09-03T23:38:03.474157920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 3 23:38:03.474243 containerd[1534]: time="2025-09-03T23:38:03.474228280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474331880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474370640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474385080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474396480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474408240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474418240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474429560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 3 23:38:03.474555 containerd[1534]: time="2025-09-03T23:38:03.474456320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 3 23:38:03.474699 containerd[1534]: time="2025-09-03T23:38:03.474563600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 3 23:38:03.474699 containerd[1534]: time="2025-09-03T23:38:03.474591120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 3 23:38:03.474699 containerd[1534]: time="2025-09-03T23:38:03.474660680Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:38:03.474699 containerd[1534]: time="2025-09-03T23:38:03.474681560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:38:03.474699 containerd[1534]: time="2025-09-03T23:38:03.474695480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:38:03.474801 containerd[1534]: time="2025-09-03T23:38:03.474707080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:38:03.474801 containerd[1534]: time="2025-09-03T23:38:03.474742400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 3 23:38:03.474801 containerd[1534]: time="2025-09-03T23:38:03.474756560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.474771160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.474981720Z" level=info msg="runtime interface created"
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.474993960Z" level=info msg="created NRI interface"
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.475004000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.475020240Z" level=info msg="Connect containerd service"
Sep 3 23:38:03.475140 containerd[1534]: time="2025-09-03T23:38:03.475059600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 3 23:38:03.476019 containerd[1534]: time="2025-09-03T23:38:03.475896800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:38:03.554031 containerd[1534]: time="2025-09-03T23:38:03.553970720Z" level=info msg="Start subscribing containerd event"
Sep 3 23:38:03.554125 containerd[1534]: time="2025-09-03T23:38:03.554074640Z" level=info msg="Start recovering state"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554195240Z" level=info msg="Start event monitor"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554220440Z" level=info msg="Start cni network conf syncer for default"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554228680Z" level=info msg="Start streaming server"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554239160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554281800Z" level=info msg="runtime interface starting up..."
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554288360Z" level=info msg="starting plugins..."
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554304920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554311280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554356800Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 3 23:38:03.554606 containerd[1534]: time="2025-09-03T23:38:03.554593080Z" level=info msg="containerd successfully booted in 0.095672s"
Sep 3 23:38:03.554694 systemd[1]: Started containerd.service - containerd container runtime.
Sep 3 23:38:03.658272 tar[1519]: linux-arm64/LICENSE
Sep 3 23:38:03.658446 tar[1519]: linux-arm64/README.md
Sep 3 23:38:03.675769 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 3 23:38:04.723152 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 3 23:38:04.742716 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 3 23:38:04.745317 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 3 23:38:04.760090 systemd[1]: issuegen.service: Deactivated successfully.
Sep 3 23:38:04.760289 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 3 23:38:04.762678 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 3 23:38:04.779892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 3 23:38:04.782273 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 3 23:38:04.784206 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 3 23:38:04.785310 systemd[1]: Reached target getty.target - Login Prompts.
Sep 3 23:38:04.865392 systemd-networkd[1434]: eth0: Gained IPv6LL
Sep 3 23:38:04.867666 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 3 23:38:04.869213 systemd[1]: Reached target network-online.target - Network is Online.
Sep 3 23:38:04.871309 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 3 23:38:04.873487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:38:04.875477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 3 23:38:04.896012 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 3 23:38:04.896265 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 3 23:38:04.897688 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 3 23:38:04.900083 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 3 23:38:05.433582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:05.435028 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 3 23:38:05.437317 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:38:05.439863 systemd[1]: Startup finished in 1.956s (kernel) + 4.980s (initrd) + 3.911s (userspace) = 10.848s.
Sep 3 23:38:05.807088 kubelet[1631]: E0903 23:38:05.806968 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:38:05.809340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:38:05.809498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:38:05.810839 systemd[1]: kubelet.service: Consumed 759ms CPU time, 256.5M memory peak.
Sep 3 23:38:10.124268 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 3 23:38:10.125946 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:42782.service - OpenSSH per-connection server daemon (10.0.0.1:42782).
Sep 3 23:38:10.186183 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 42782 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:10.187997 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:10.200803 systemd-logind[1510]: New session 1 of user core.
Sep 3 23:38:10.201064 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 3 23:38:10.201990 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 3 23:38:10.230211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 3 23:38:10.232413 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 3 23:38:10.256942 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 3 23:38:10.259264 systemd-logind[1510]: New session c1 of user core.
Sep 3 23:38:10.375096 systemd[1648]: Queued start job for default target default.target.
Sep 3 23:38:10.393692 systemd[1648]: Created slice app.slice - User Application Slice.
Sep 3 23:38:10.393726 systemd[1648]: Reached target paths.target - Paths.
Sep 3 23:38:10.393786 systemd[1648]: Reached target timers.target - Timers.
Sep 3 23:38:10.395087 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 3 23:38:10.403889 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 3 23:38:10.403951 systemd[1648]: Reached target sockets.target - Sockets.
Sep 3 23:38:10.403989 systemd[1648]: Reached target basic.target - Basic System.
Sep 3 23:38:10.404017 systemd[1648]: Reached target default.target - Main User Target.
Sep 3 23:38:10.404047 systemd[1648]: Startup finished in 139ms.
Sep 3 23:38:10.404193 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 3 23:38:10.405507 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 3 23:38:10.467209 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:42784.service - OpenSSH per-connection server daemon (10.0.0.1:42784).
Sep 3 23:38:10.520883 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 42784 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:10.522616 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:10.527791 systemd-logind[1510]: New session 2 of user core.
Sep 3 23:38:10.536882 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 3 23:38:10.592845 sshd[1661]: Connection closed by 10.0.0.1 port 42784
Sep 3 23:38:10.593114 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Sep 3 23:38:10.603767 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:42784.service: Deactivated successfully.
Sep 3 23:38:10.605145 systemd[1]: session-2.scope: Deactivated successfully.
Sep 3 23:38:10.607786 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit.
Sep 3 23:38:10.609149 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:42790.service - OpenSSH per-connection server daemon (10.0.0.1:42790).
Sep 3 23:38:10.609958 systemd-logind[1510]: Removed session 2.
Sep 3 23:38:10.666660 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 42790 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:10.667879 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:10.672246 systemd-logind[1510]: New session 3 of user core.
Sep 3 23:38:10.681920 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 3 23:38:10.731682 sshd[1669]: Connection closed by 10.0.0.1 port 42790
Sep 3 23:38:10.732126 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Sep 3 23:38:10.743197 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:42790.service: Deactivated successfully.
Sep 3 23:38:10.744685 systemd[1]: session-3.scope: Deactivated successfully.
Sep 3 23:38:10.745489 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit.
Sep 3 23:38:10.748653 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:42802.service - OpenSSH per-connection server daemon (10.0.0.1:42802).
Sep 3 23:38:10.749380 systemd-logind[1510]: Removed session 3.
Sep 3 23:38:10.797213 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 42802 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:10.798246 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:10.803179 systemd-logind[1510]: New session 4 of user core.
Sep 3 23:38:10.818893 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 3 23:38:10.869693 sshd[1677]: Connection closed by 10.0.0.1 port 42802
Sep 3 23:38:10.870065 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 3 23:38:10.884229 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:42802.service: Deactivated successfully.
Sep 3 23:38:10.886972 systemd[1]: session-4.scope: Deactivated successfully.
Sep 3 23:38:10.887823 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit.
Sep 3 23:38:10.890264 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:42806.service - OpenSSH per-connection server daemon (10.0.0.1:42806).
Sep 3 23:38:10.891146 systemd-logind[1510]: Removed session 4.
Sep 3 23:38:10.937248 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 42806 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:10.938328 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:10.942801 systemd-logind[1510]: New session 5 of user core.
Sep 3 23:38:10.949872 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 3 23:38:11.006370 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 3 23:38:11.006657 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:38:11.021419 sudo[1686]: pam_unix(sudo:session): session closed for user root
Sep 3 23:38:11.022979 sshd[1685]: Connection closed by 10.0.0.1 port 42806
Sep 3 23:38:11.023303 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Sep 3 23:38:11.033830 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:42806.service: Deactivated successfully.
Sep 3 23:38:11.035961 systemd[1]: session-5.scope: Deactivated successfully.
Sep 3 23:38:11.036628 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit.
Sep 3 23:38:11.039065 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:42816.service - OpenSSH per-connection server daemon (10.0.0.1:42816).
Sep 3 23:38:11.039899 systemd-logind[1510]: Removed session 5.
Sep 3 23:38:11.089341 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 42816 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:11.090537 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:11.094312 systemd-logind[1510]: New session 6 of user core.
Sep 3 23:38:11.102858 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 3 23:38:11.157188 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 3 23:38:11.157767 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:38:11.237108 sudo[1696]: pam_unix(sudo:session): session closed for user root
Sep 3 23:38:11.242000 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 3 23:38:11.242280 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:38:11.250429 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:38:11.306567 augenrules[1718]: No rules
Sep 3 23:38:11.307838 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:38:11.308045 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:38:11.310157 sudo[1695]: pam_unix(sudo:session): session closed for user root
Sep 3 23:38:11.311427 sshd[1694]: Connection closed by 10.0.0.1 port 42816
Sep 3 23:38:11.311959 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
Sep 3 23:38:11.323775 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:42816.service: Deactivated successfully.
Sep 3 23:38:11.325068 systemd[1]: session-6.scope: Deactivated successfully.
Sep 3 23:38:11.325765 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit.
Sep 3 23:38:11.328084 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826).
Sep 3 23:38:11.329074 systemd-logind[1510]: Removed session 6.
Sep 3 23:38:11.376359 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:38:11.377500 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:38:11.382238 systemd-logind[1510]: New session 7 of user core.
Sep 3 23:38:11.389885 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 3 23:38:11.441667 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 3 23:38:11.441961 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:38:11.757479 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 3 23:38:11.777031 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 3 23:38:12.018216 dockerd[1750]: time="2025-09-03T23:38:12.018081732Z" level=info msg="Starting up"
Sep 3 23:38:12.020345 dockerd[1750]: time="2025-09-03T23:38:12.020315437Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 3 23:38:12.060752 dockerd[1750]: time="2025-09-03T23:38:12.060704436Z" level=info msg="Loading containers: start."
Sep 3 23:38:12.068757 kernel: Initializing XFRM netlink socket
Sep 3 23:38:12.244794 systemd-networkd[1434]: docker0: Link UP
Sep 3 23:38:12.247692 dockerd[1750]: time="2025-09-03T23:38:12.247648324Z" level=info msg="Loading containers: done."
Sep 3 23:38:12.259713 dockerd[1750]: time="2025-09-03T23:38:12.259669380Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 3 23:38:12.259848 dockerd[1750]: time="2025-09-03T23:38:12.259758904Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 3 23:38:12.259875 dockerd[1750]: time="2025-09-03T23:38:12.259854889Z" level=info msg="Initializing buildkit"
Sep 3 23:38:12.280638 dockerd[1750]: time="2025-09-03T23:38:12.280540205Z" level=info msg="Completed buildkit initialization"
Sep 3 23:38:12.285434 dockerd[1750]: time="2025-09-03T23:38:12.285399562Z" level=info msg="Daemon has completed initialization"
Sep 3 23:38:12.285599 dockerd[1750]: time="2025-09-03T23:38:12.285489939Z" level=info msg="API listen on /run/docker.sock"
Sep 3 23:38:12.285623 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 3 23:38:13.056560 containerd[1534]: time="2025-09-03T23:38:13.056518045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 3 23:38:13.607993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712795475.mount: Deactivated successfully.
Sep 3 23:38:14.686746 containerd[1534]: time="2025-09-03T23:38:14.686676622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:14.687144 containerd[1534]: time="2025-09-03T23:38:14.687109762Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 3 23:38:14.687910 containerd[1534]: time="2025-09-03T23:38:14.687855427Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:14.690742 containerd[1534]: time="2025-09-03T23:38:14.690607873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:14.691548 containerd[1534]: time="2025-09-03T23:38:14.691521967Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.634963701s"
Sep 3 23:38:14.691589 containerd[1534]: time="2025-09-03T23:38:14.691557273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 3 23:38:14.692927 containerd[1534]: time="2025-09-03T23:38:14.692884627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 3 23:38:15.745144 containerd[1534]: time="2025-09-03T23:38:15.744098941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:15.745144 containerd[1534]: time="2025-09-03T23:38:15.744608684Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 3 23:38:15.745781 containerd[1534]: time="2025-09-03T23:38:15.745752482Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:15.748144 containerd[1534]: time="2025-09-03T23:38:15.748119556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:15.749768 containerd[1534]: time="2025-09-03T23:38:15.749740310Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.056805301s"
Sep 3 23:38:15.749939 containerd[1534]: time="2025-09-03T23:38:15.749837254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 3 23:38:15.750350 containerd[1534]: time="2025-09-03T23:38:15.750247668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 3 23:38:16.059852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:38:16.061433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:38:16.187161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:16.190845 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:38:16.227222 kubelet[2023]: E0903 23:38:16.227168 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:38:16.230397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:38:16.230640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:38:16.231283 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak.
Sep 3 23:38:17.017785 containerd[1534]: time="2025-09-03T23:38:17.017711699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:17.018231 containerd[1534]: time="2025-09-03T23:38:17.018202718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 3 23:38:17.019383 containerd[1534]: time="2025-09-03T23:38:17.019350823Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:17.022119 containerd[1534]: time="2025-09-03T23:38:17.022097242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:17.023619 containerd[1534]: time="2025-09-03T23:38:17.023538120Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.273259789s"
Sep 3 23:38:17.023619 containerd[1534]: time="2025-09-03T23:38:17.023581874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 3 23:38:17.024096 containerd[1534]: time="2025-09-03T23:38:17.024080032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 3 23:38:17.948956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681558544.mount: Deactivated successfully.
Sep 3 23:38:18.156935 containerd[1534]: time="2025-09-03T23:38:18.156332504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:18.156935 containerd[1534]: time="2025-09-03T23:38:18.156907425Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 3 23:38:18.157551 containerd[1534]: time="2025-09-03T23:38:18.157527990Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:18.159113 containerd[1534]: time="2025-09-03T23:38:18.159080410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:18.159697 containerd[1534]: time="2025-09-03T23:38:18.159675636Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.135496611s"
Sep 3 23:38:18.159789 containerd[1534]: time="2025-09-03T23:38:18.159776471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 3 23:38:18.160416 containerd[1534]: time="2025-09-03T23:38:18.160392565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 3 23:38:18.658175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795588058.mount: Deactivated successfully.
Sep 3 23:38:19.309983 containerd[1534]: time="2025-09-03T23:38:19.309937569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:19.310922 containerd[1534]: time="2025-09-03T23:38:19.310895565Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 3 23:38:19.311636 containerd[1534]: time="2025-09-03T23:38:19.311593423Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:19.314527 containerd[1534]: time="2025-09-03T23:38:19.314464553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:19.315838 containerd[1534]: time="2025-09-03T23:38:19.315807984Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.155381427s"
Sep 3 23:38:19.315900 containerd[1534]: time="2025-09-03T23:38:19.315842841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 3 23:38:19.316495 containerd[1534]: time="2025-09-03T23:38:19.316467404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:38:19.762206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248806202.mount: Deactivated successfully.
Sep 3 23:38:19.766849 containerd[1534]: time="2025-09-03T23:38:19.766798046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:38:19.767299 containerd[1534]: time="2025-09-03T23:38:19.767272154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 3 23:38:19.768129 containerd[1534]: time="2025-09-03T23:38:19.768102194Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:38:19.769861 containerd[1534]: time="2025-09-03T23:38:19.769831022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:38:19.770769 containerd[1534]: time="2025-09-03T23:38:19.770717493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 454.224289ms"
Sep 3 23:38:19.770769 containerd[1534]: time="2025-09-03T23:38:19.770761688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 3 23:38:19.771465 containerd[1534]: time="2025-09-03T23:38:19.771255397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 3 23:38:20.240435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935671745.mount: Deactivated successfully.
Sep 3 23:38:22.127372 containerd[1534]: time="2025-09-03T23:38:22.127325561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:22.129386 containerd[1534]: time="2025-09-03T23:38:22.129350211Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 3 23:38:22.130504 containerd[1534]: time="2025-09-03T23:38:22.130454861Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:22.133640 containerd[1534]: time="2025-09-03T23:38:22.133602317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:22.135681 containerd[1534]: time="2025-09-03T23:38:22.135597845Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.36430153s"
Sep 3 23:38:22.135681 containerd[1534]: time="2025-09-03T23:38:22.135644520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 3 23:38:26.280077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:38:26.281639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:38:26.291254 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:38:26.291327 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:38:26.291539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:26.293687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:38:26.317098 systemd[1]: Reload requested from client PID 2184 ('systemctl') (unit session-7.scope)...
Sep 3 23:38:26.317114 systemd[1]: Reloading...
Sep 3 23:38:26.386822 zram_generator::config[2229]: No configuration found.
Sep 3 23:38:26.509277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:38:26.607847 systemd[1]: Reloading finished in 290 ms.
Sep 3 23:38:26.673205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:38:26.673287 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:38:26.673560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:26.673607 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak.
Sep 3 23:38:26.675174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:38:26.786508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
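Because each failed kubelet start produces the same `Started kubelet.service` / `Failed with result 'exit-code'` pair, the crash loop can be tallied mechanically from a saved copy of a journal like this one. A small sketch, using sample lines copied verbatim from this capture (the temp file is only for self-containment):

```shell
# Count kubelet start/failure events in a captured journal excerpt.
log=$(mktemp)
cat >"$log" <<'EOF'
Sep 3 23:38:05.433582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:05.809498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:38:16.187161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:38:16.230640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
EOF
starts=$(grep -c "Started kubelet.service" "$log")
fails=$(grep -c "kubelet.service: Failed" "$log")
echo "$starts starts, $fails failures"
rm -f "$log"
```

On a live systemd host the same lines would come from `journalctl -u kubelet.service` rather than a saved file.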
Sep 3 23:38:26.790363 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:38:26.825745 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:38:26.825745 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:38:26.825745 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:38:26.826090 kubelet[2271]: I0903 23:38:26.825846 2271 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:38:27.657144 kubelet[2271]: I0903 23:38:27.657098 2271 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 3 23:38:27.657144 kubelet[2271]: I0903 23:38:27.657132 2271 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:38:27.657388 kubelet[2271]: I0903 23:38:27.657371 2271 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 3 23:38:27.687532 kubelet[2271]: I0903 23:38:27.687405 2271 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:38:27.687532 kubelet[2271]: E0903 23:38:27.687454 2271 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:38:27.695564 kubelet[2271]: I0903 23:38:27.695537 2271 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:38:27.699123 kubelet[2271]: I0903 23:38:27.699106 2271 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:38:27.699995 kubelet[2271]: I0903 23:38:27.699972 2271 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 3 23:38:27.700146 kubelet[2271]: I0903 23:38:27.700121 2271 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:38:27.700309 kubelet[2271]: I0903 23:38:27.700148 2271 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:38:27.700397 kubelet[2271]: I0903 23:38:27.700377 2271 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:38:27.700397 kubelet[2271]: I0903 23:38:27.700387 2271 container_manager_linux.go:300] "Creating device plugin manager"
Sep 3 23:38:27.700639 kubelet[2271]: I0903 23:38:27.700622 2271 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:38:27.706241 kubelet[2271]: W0903 23:38:27.706146 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 3 23:38:27.706241 kubelet[2271]: E0903 23:38:27.706211 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:38:27.706370 kubelet[2271]: I0903 23:38:27.706347 2271 kubelet.go:408] "Attempting to sync node with API server"
Sep 3 23:38:27.706403 kubelet[2271]: I0903 23:38:27.706377 2271 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:38:27.706403 kubelet[2271]: I0903 23:38:27.706400 2271 kubelet.go:314] "Adding apiserver pod source"
Sep 3 23:38:27.706490 kubelet[2271]: I0903 23:38:27.706479 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:38:27.707036 kubelet[2271]: W0903 23:38:27.706968 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 3 23:38:27.707036 kubelet[2271]: E0903 23:38:27.707011 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:38:27.710906 kubelet[2271]: I0903 23:38:27.710017 2271 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:38:27.710906 kubelet[2271]: I0903 23:38:27.710818 2271 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:38:27.710988 kubelet[2271]: W0903 23:38:27.710919 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:38:27.711849 kubelet[2271]: I0903 23:38:27.711821 2271 server.go:1274] "Started kubelet" Sep 3 23:38:27.713862 kubelet[2271]: I0903 23:38:27.713610 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:38:27.714907 kubelet[2271]: I0903 23:38:27.713937 2271 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:38:27.714907 kubelet[2271]: I0903 23:38:27.714014 2271 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:38:27.717143 kubelet[2271]: I0903 23:38:27.717117 2271 server.go:449] "Adding debug handlers to kubelet server" Sep 3 23:38:27.721870 kubelet[2271]: E0903 23:38:27.721847 2271 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:38:27.723001 kubelet[2271]: I0903 23:38:27.722978 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:38:27.723489 kubelet[2271]: I0903 23:38:27.723467 2271 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:38:27.729936 kubelet[2271]: I0903 23:38:27.729397 2271 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 3 23:38:27.729936 kubelet[2271]: E0903 23:38:27.729561 2271 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:38:27.729936 kubelet[2271]: E0903 23:38:27.721480 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861ea157fdff19d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:38:27.711799709 +0000 UTC m=+0.918204587,LastTimestamp:2025-09-03 23:38:27.711799709 +0000 UTC m=+0.918204587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 3 23:38:27.729936 kubelet[2271]: I0903 23:38:27.729776 2271 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 3 23:38:27.729936 kubelet[2271]: I0903 23:38:27.729889 2271 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:38:27.730253 kubelet[2271]: W0903 23:38:27.730208 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 3 23:38:27.730291 kubelet[2271]: E0903 23:38:27.730258 2271 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:38:27.731089 kubelet[2271]: I0903 23:38:27.731064 2271 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:38:27.731230 kubelet[2271]: I0903 23:38:27.731149 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:38:27.731743 kubelet[2271]: E0903 23:38:27.731689 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Sep 3 23:38:27.732209 kubelet[2271]: I0903 23:38:27.732187 2271 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:38:27.745328 kubelet[2271]: I0903 23:38:27.745306 2271 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 3 23:38:27.745328 kubelet[2271]: I0903 23:38:27.745324 2271 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 3 23:38:27.745412 kubelet[2271]: I0903 23:38:27.745342 2271 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:38:27.747129 kubelet[2271]: I0903 23:38:27.747089 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:38:27.748166 kubelet[2271]: I0903 23:38:27.748138 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 3 23:38:27.748166 kubelet[2271]: I0903 23:38:27.748161 2271 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:38:27.748227 kubelet[2271]: I0903 23:38:27.748178 2271 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:38:27.748227 kubelet[2271]: E0903 23:38:27.748215 2271 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:38:27.749619 kubelet[2271]: I0903 23:38:27.749591 2271 policy_none.go:49] "None policy: Start" Sep 3 23:38:27.752090 kubelet[2271]: W0903 23:38:27.752028 2271 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 3 23:38:27.752090 kubelet[2271]: E0903 23:38:27.752081 2271 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:38:27.752593 kubelet[2271]: I0903 23:38:27.752575 2271 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:38:27.752907 kubelet[2271]: I0903 23:38:27.752670 2271 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:38:27.759538 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:38:27.773889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:38:27.776705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 3 23:38:27.796554 kubelet[2271]: I0903 23:38:27.796517 2271 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:38:27.796897 kubelet[2271]: I0903 23:38:27.796878 2271 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:38:27.796991 kubelet[2271]: I0903 23:38:27.796961 2271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:38:27.797242 kubelet[2271]: I0903 23:38:27.797219 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:38:27.798145 kubelet[2271]: E0903 23:38:27.798127 2271 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 3 23:38:27.856967 systemd[1]: Created slice kubepods-burstable-podb6215e769877381360ef3e06455f1bee.slice - libcontainer container kubepods-burstable-podb6215e769877381360ef3e06455f1bee.slice. 
Sep 3 23:38:27.885159 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 3 23:38:27.899024 kubelet[2271]: I0903 23:38:27.898978 2271 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:38:27.899559 kubelet[2271]: E0903 23:38:27.899500 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 3 23:38:27.904032 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 3 23:38:27.931104 kubelet[2271]: I0903 23:38:27.930939 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:27.931104 kubelet[2271]: I0903 23:38:27.930977 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 3 23:38:27.931104 kubelet[2271]: I0903 23:38:27.930994 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:27.931104 kubelet[2271]: I0903 
23:38:27.931010 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:27.931402 kubelet[2271]: I0903 23:38:27.931283 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:27.931402 kubelet[2271]: I0903 23:38:27.931314 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:27.931402 kubelet[2271]: I0903 23:38:27.931346 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:27.931402 kubelet[2271]: I0903 23:38:27.931362 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 
23:38:27.931519 kubelet[2271]: I0903 23:38:27.931398 2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:27.933137 kubelet[2271]: E0903 23:38:27.933095 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Sep 3 23:38:28.101127 kubelet[2271]: I0903 23:38:28.101092 2271 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:38:28.101448 kubelet[2271]: E0903 23:38:28.101418 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 3 23:38:28.183898 containerd[1534]: time="2025-09-03T23:38:28.183809992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6215e769877381360ef3e06455f1bee,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:28.202586 containerd[1534]: time="2025-09-03T23:38:28.202548450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:28.206765 containerd[1534]: time="2025-09-03T23:38:28.206642456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:28.218799 containerd[1534]: time="2025-09-03T23:38:28.218743293Z" level=info msg="connecting to shim 740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902" 
address="unix:///run/containerd/s/495191e480b832bae2e8beb368455704af84ac0ac3e664f850bd05076a2ebfb1" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:28.245122 containerd[1534]: time="2025-09-03T23:38:28.245040060Z" level=info msg="connecting to shim 8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a" address="unix:///run/containerd/s/2a31d6396bcb864ecf907ad98be3ba622d11b2e008e69e920412d8fdd7cac804" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:28.245360 containerd[1534]: time="2025-09-03T23:38:28.245308283Z" level=info msg="connecting to shim 563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc" address="unix:///run/containerd/s/05ce940d6f54313fd88cb21e0a5fb3054c64ede009254e1bc12d9b6160bdd539" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:28.257970 systemd[1]: Started cri-containerd-740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902.scope - libcontainer container 740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902. Sep 3 23:38:28.278922 systemd[1]: Started cri-containerd-563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc.scope - libcontainer container 563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc. Sep 3 23:38:28.284602 systemd[1]: Started cri-containerd-8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a.scope - libcontainer container 8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a. 
Sep 3 23:38:28.325290 containerd[1534]: time="2025-09-03T23:38:28.325144481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc\"" Sep 3 23:38:28.326289 containerd[1534]: time="2025-09-03T23:38:28.325774864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6215e769877381360ef3e06455f1bee,Namespace:kube-system,Attempt:0,} returns sandbox id \"740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902\"" Sep 3 23:38:28.329578 containerd[1534]: time="2025-09-03T23:38:28.329539251Z" level=info msg="CreateContainer within sandbox \"563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:38:28.329857 containerd[1534]: time="2025-09-03T23:38:28.329756980Z" level=info msg="CreateContainer within sandbox \"740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:38:28.334454 kubelet[2271]: E0903 23:38:28.334400 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Sep 3 23:38:28.340235 containerd[1534]: time="2025-09-03T23:38:28.340196819Z" level=info msg="Container 8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:38:28.341381 containerd[1534]: time="2025-09-03T23:38:28.341347258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a\"" Sep 3 23:38:28.343947 containerd[1534]: time="2025-09-03T23:38:28.343913997Z" level=info msg="CreateContainer within sandbox \"8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:38:28.345351 containerd[1534]: time="2025-09-03T23:38:28.345316951Z" level=info msg="Container acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:38:28.349086 containerd[1534]: time="2025-09-03T23:38:28.349049558Z" level=info msg="CreateContainer within sandbox \"563d9fba171a56a9d00dc0aaaba628e4671b6041b16b8a09e7c87a76acd160cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8\"" Sep 3 23:38:28.349983 containerd[1534]: time="2025-09-03T23:38:28.349803453Z" level=info msg="StartContainer for \"8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8\"" Sep 3 23:38:28.351089 containerd[1534]: time="2025-09-03T23:38:28.351058810Z" level=info msg="connecting to shim 8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8" address="unix:///run/containerd/s/05ce940d6f54313fd88cb21e0a5fb3054c64ede009254e1bc12d9b6160bdd539" protocol=ttrpc version=3 Sep 3 23:38:28.352299 containerd[1534]: time="2025-09-03T23:38:28.352271046Z" level=info msg="Container 66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:38:28.356160 containerd[1534]: time="2025-09-03T23:38:28.356044369Z" level=info msg="CreateContainer within sandbox \"740dbba0c749e86ff1830b7e8730652bc79e1a1c40427fa31a8b486c3758d902\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1\"" Sep 3 23:38:28.356494 containerd[1534]: 
time="2025-09-03T23:38:28.356472894Z" level=info msg="StartContainer for \"acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1\"" Sep 3 23:38:28.357680 containerd[1534]: time="2025-09-03T23:38:28.357632391Z" level=info msg="connecting to shim acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1" address="unix:///run/containerd/s/495191e480b832bae2e8beb368455704af84ac0ac3e664f850bd05076a2ebfb1" protocol=ttrpc version=3 Sep 3 23:38:28.359233 containerd[1534]: time="2025-09-03T23:38:28.359150601Z" level=info msg="CreateContainer within sandbox \"8c84eb08c493c38456ec114f0519597de045574b8e79e844f0e59e667c18ee0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901\"" Sep 3 23:38:28.359733 containerd[1534]: time="2025-09-03T23:38:28.359692658Z" level=info msg="StartContainer for \"66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901\"" Sep 3 23:38:28.361599 containerd[1534]: time="2025-09-03T23:38:28.361360750Z" level=info msg="connecting to shim 66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901" address="unix:///run/containerd/s/2a31d6396bcb864ecf907ad98be3ba622d11b2e008e69e920412d8fdd7cac804" protocol=ttrpc version=3 Sep 3 23:38:28.366926 systemd[1]: Started cri-containerd-8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8.scope - libcontainer container 8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8. Sep 3 23:38:28.385954 systemd[1]: Started cri-containerd-acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1.scope - libcontainer container acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1. Sep 3 23:38:28.391068 systemd[1]: Started cri-containerd-66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901.scope - libcontainer container 66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901. 
Sep 3 23:38:28.435179 containerd[1534]: time="2025-09-03T23:38:28.434488153Z" level=info msg="StartContainer for \"8df1bb7f322d7f36cc112c417910f5efacc05c51b4237b340faeb079afa972c8\" returns successfully" Sep 3 23:38:28.435492 containerd[1534]: time="2025-09-03T23:38:28.434764832Z" level=info msg="StartContainer for \"acddc88c0412d9f6fd9bb23d2f552e59d278698ef90663698ad071c37c12d0f1\" returns successfully" Sep 3 23:38:28.444260 containerd[1534]: time="2025-09-03T23:38:28.444225312Z" level=info msg="StartContainer for \"66a4999f8524a8a9ed972e786ad43d9c0232069d28cc25bd4322c60346c10901\" returns successfully" Sep 3 23:38:28.505310 kubelet[2271]: I0903 23:38:28.505253 2271 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:38:28.505992 kubelet[2271]: E0903 23:38:28.505959 2271 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 3 23:38:29.307852 kubelet[2271]: I0903 23:38:29.307530 2271 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:38:30.100555 kubelet[2271]: E0903 23:38:30.100510 2271 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 3 23:38:30.173573 kubelet[2271]: I0903 23:38:30.173406 2271 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 3 23:38:30.173573 kubelet[2271]: E0903 23:38:30.173447 2271 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 3 23:38:30.214873 kubelet[2271]: E0903 23:38:30.214767 2271 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1861ea157fdff19d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:38:27.711799709 +0000 UTC m=+0.918204587,LastTimestamp:2025-09-03 23:38:27.711799709 +0000 UTC m=+0.918204587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 3 23:38:30.708160 kubelet[2271]: I0903 23:38:30.708121 2271 apiserver.go:52] "Watching apiserver" Sep 3 23:38:30.729945 kubelet[2271]: I0903 23:38:30.729901 2271 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 3 23:38:32.228221 systemd[1]: Reload requested from client PID 2542 ('systemctl') (unit session-7.scope)... Sep 3 23:38:32.228239 systemd[1]: Reloading... Sep 3 23:38:32.295753 zram_generator::config[2585]: No configuration found. Sep 3 23:38:32.434735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:38:32.542711 systemd[1]: Reloading finished in 314 ms. Sep 3 23:38:32.563079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:38:32.579571 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:38:32.579850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:38:32.579910 systemd[1]: kubelet.service: Consumed 1.319s CPU time, 129.3M memory peak. Sep 3 23:38:32.581674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:38:32.726859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:38:32.731917 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:38:32.774917 kubelet[2627]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:38:32.774917 kubelet[2627]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 3 23:38:32.774917 kubelet[2627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:38:32.774917 kubelet[2627]: I0903 23:38:32.774890 2627 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:38:32.783463 kubelet[2627]: I0903 23:38:32.783423 2627 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 3 23:38:32.783463 kubelet[2627]: I0903 23:38:32.783454 2627 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:38:32.783707 kubelet[2627]: I0903 23:38:32.783689 2627 server.go:934] "Client rotation is on, will bootstrap in background" Sep 3 23:38:32.785201 kubelet[2627]: I0903 23:38:32.785181 2627 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 3 23:38:32.787315 kubelet[2627]: I0903 23:38:32.787275 2627 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:38:32.791490 kubelet[2627]: I0903 23:38:32.791452 2627 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:38:32.794660 kubelet[2627]: I0903 23:38:32.794145 2627 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 3 23:38:32.794660 kubelet[2627]: I0903 23:38:32.794301 2627 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 3 23:38:32.794660 kubelet[2627]: I0903 23:38:32.794422 2627 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:38:32.794793 kubelet[2627]: I0903 23:38:32.794445 2627 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.av
ailable","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:38:32.794793 kubelet[2627]: I0903 23:38:32.794760 2627 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:38:32.794793 kubelet[2627]: I0903 23:38:32.794770 2627 container_manager_linux.go:300] "Creating device plugin manager" Sep 3 23:38:32.794907 kubelet[2627]: I0903 23:38:32.794809 2627 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:38:32.794929 kubelet[2627]: I0903 23:38:32.794913 2627 kubelet.go:408] "Attempting to sync node with API server" Sep 3 23:38:32.794951 kubelet[2627]: I0903 23:38:32.794928 2627 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:38:32.794951 kubelet[2627]: I0903 23:38:32.794948 2627 kubelet.go:314] "Adding apiserver pod source" Sep 3 23:38:32.794988 kubelet[2627]: I0903 23:38:32.794958 2627 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:38:32.795681 kubelet[2627]: I0903 23:38:32.795660 2627 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:38:32.796473 kubelet[2627]: I0903 23:38:32.796436 2627 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:38:32.797785 kubelet[2627]: I0903 23:38:32.797071 2627 server.go:1274] "Started kubelet" Sep 3 23:38:32.798525 kubelet[2627]: I0903 23:38:32.798489 2627 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:38:32.798694 
kubelet[2627]: I0903 23:38:32.798654 2627 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:38:32.799077 kubelet[2627]: I0903 23:38:32.799019 2627 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:38:32.801159 kubelet[2627]: I0903 23:38:32.801129 2627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:38:32.801393 kubelet[2627]: I0903 23:38:32.801212 2627 server.go:449] "Adding debug handlers to kubelet server" Sep 3 23:38:32.805591 kubelet[2627]: I0903 23:38:32.804882 2627 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:38:32.805957 kubelet[2627]: E0903 23:38:32.805918 2627 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:38:32.805957 kubelet[2627]: I0903 23:38:32.805958 2627 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 3 23:38:32.806899 kubelet[2627]: I0903 23:38:32.806047 2627 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 3 23:38:32.806899 kubelet[2627]: I0903 23:38:32.806185 2627 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:38:32.810724 kubelet[2627]: I0903 23:38:32.809186 2627 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:38:32.822119 kubelet[2627]: I0903 23:38:32.822062 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:38:32.823606 kubelet[2627]: I0903 23:38:32.823579 2627 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 3 23:38:32.823653 kubelet[2627]: I0903 23:38:32.823612 2627 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:38:32.823653 kubelet[2627]: I0903 23:38:32.823634 2627 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:38:32.823696 kubelet[2627]: E0903 23:38:32.823675 2627 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:38:32.825149 kubelet[2627]: E0903 23:38:32.824948 2627 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:38:32.829355 kubelet[2627]: I0903 23:38:32.829322 2627 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:38:32.829355 kubelet[2627]: I0903 23:38:32.829347 2627 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:38:32.857224 kubelet[2627]: I0903 23:38:32.857196 2627 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 3 23:38:32.857224 kubelet[2627]: I0903 23:38:32.857217 2627 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 3 23:38:32.857352 kubelet[2627]: I0903 23:38:32.857240 2627 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:38:32.857413 kubelet[2627]: I0903 23:38:32.857394 2627 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:38:32.857449 kubelet[2627]: I0903 23:38:32.857411 2627 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:38:32.857449 kubelet[2627]: I0903 23:38:32.857431 2627 policy_none.go:49] "None policy: Start" Sep 3 23:38:32.858198 kubelet[2627]: I0903 23:38:32.858179 2627 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:38:32.858257 kubelet[2627]: I0903 23:38:32.858206 2627 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:38:32.858368 kubelet[2627]: I0903 
23:38:32.858352 2627 state_mem.go:75] "Updated machine memory state" Sep 3 23:38:32.862539 kubelet[2627]: I0903 23:38:32.862367 2627 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:38:32.862539 kubelet[2627]: I0903 23:38:32.862542 2627 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:38:32.862649 kubelet[2627]: I0903 23:38:32.862555 2627 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:38:32.863099 kubelet[2627]: I0903 23:38:32.863036 2627 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:38:32.964707 kubelet[2627]: I0903 23:38:32.964680 2627 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:38:32.973776 kubelet[2627]: I0903 23:38:32.973706 2627 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 3 23:38:32.973911 kubelet[2627]: I0903 23:38:32.973836 2627 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 3 23:38:33.007863 kubelet[2627]: I0903 23:38:33.007822 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.007863 kubelet[2627]: I0903 23:38:33.007860 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.008053 kubelet[2627]: I0903 23:38:33.007893 2627 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:33.008053 kubelet[2627]: I0903 23:38:33.007913 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.008053 kubelet[2627]: I0903 23:38:33.007932 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.008053 kubelet[2627]: I0903 23:38:33.007953 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.008053 kubelet[2627]: I0903 23:38:33.007970 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 3 23:38:33.008156 kubelet[2627]: I0903 23:38:33.007986 
2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:33.008156 kubelet[2627]: I0903 23:38:33.008002 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6215e769877381360ef3e06455f1bee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6215e769877381360ef3e06455f1bee\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:33.227282 sudo[2662]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:38:33.227571 sudo[2662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:38:33.679790 sudo[2662]: pam_unix(sudo:session): session closed for user root Sep 3 23:38:33.800391 kubelet[2627]: I0903 23:38:33.800261 2627 apiserver.go:52] "Watching apiserver" Sep 3 23:38:33.806279 kubelet[2627]: I0903 23:38:33.806239 2627 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 3 23:38:33.846543 kubelet[2627]: E0903 23:38:33.846504 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 3 23:38:33.847331 kubelet[2627]: E0903 23:38:33.847304 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 3 23:38:33.847371 kubelet[2627]: E0903 23:38:33.847309 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 3 23:38:33.889788 kubelet[2627]: I0903 
23:38:33.889727 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.889693399 podStartE2EDuration="1.889693399s" podCreationTimestamp="2025-09-03 23:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:38:33.878838458 +0000 UTC m=+1.143658850" watchObservedRunningTime="2025-09-03 23:38:33.889693399 +0000 UTC m=+1.154513791" Sep 3 23:38:33.889908 kubelet[2627]: I0903 23:38:33.889845 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.889839981 podStartE2EDuration="1.889839981s" podCreationTimestamp="2025-09-03 23:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:38:33.889158404 +0000 UTC m=+1.153978756" watchObservedRunningTime="2025-09-03 23:38:33.889839981 +0000 UTC m=+1.154660373" Sep 3 23:38:33.908733 kubelet[2627]: I0903 23:38:33.908674 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.908656035 podStartE2EDuration="1.908656035s" podCreationTimestamp="2025-09-03 23:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:38:33.898349782 +0000 UTC m=+1.163170134" watchObservedRunningTime="2025-09-03 23:38:33.908656035 +0000 UTC m=+1.173476427" Sep 3 23:38:36.037926 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 3 23:38:36.039270 sshd[1729]: Connection closed by 10.0.0.1 port 42826 Sep 3 23:38:36.039666 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 3 23:38:36.042873 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. 
Sep 3 23:38:36.043523 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:42826.service: Deactivated successfully. Sep 3 23:38:36.045293 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:38:36.045468 systemd[1]: session-7.scope: Consumed 6.873s CPU time, 267.5M memory peak. Sep 3 23:38:36.047094 systemd-logind[1510]: Removed session 7. Sep 3 23:38:38.313785 kubelet[2627]: I0903 23:38:38.313709 2627 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 3 23:38:38.314199 containerd[1534]: time="2025-09-03T23:38:38.314135645Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 3 23:38:38.314510 kubelet[2627]: I0903 23:38:38.314459 2627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 3 23:38:39.148681 kubelet[2627]: I0903 23:38:39.148648 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cef5c0dc-a777-4c9c-91d1-598ff2e642e3-xtables-lock\") pod \"kube-proxy-h2hrp\" (UID: \"cef5c0dc-a777-4c9c-91d1-598ff2e642e3\") " pod="kube-system/kube-proxy-h2hrp" Sep 3 23:38:39.149953 kubelet[2627]: I0903 23:38:39.149933 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkngh\" (UniqueName: \"kubernetes.io/projected/cef5c0dc-a777-4c9c-91d1-598ff2e642e3-kube-api-access-gkngh\") pod \"kube-proxy-h2hrp\" (UID: \"cef5c0dc-a777-4c9c-91d1-598ff2e642e3\") " pod="kube-system/kube-proxy-h2hrp" Sep 3 23:38:39.150043 kubelet[2627]: I0903 23:38:39.150026 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cef5c0dc-a777-4c9c-91d1-598ff2e642e3-kube-proxy\") pod \"kube-proxy-h2hrp\" (UID: \"cef5c0dc-a777-4c9c-91d1-598ff2e642e3\") " pod="kube-system/kube-proxy-h2hrp" Sep 
3 23:38:39.150279 kubelet[2627]: I0903 23:38:39.150109 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cef5c0dc-a777-4c9c-91d1-598ff2e642e3-lib-modules\") pod \"kube-proxy-h2hrp\" (UID: \"cef5c0dc-a777-4c9c-91d1-598ff2e642e3\") " pod="kube-system/kube-proxy-h2hrp" Sep 3 23:38:39.152944 systemd[1]: Created slice kubepods-besteffort-podcef5c0dc_a777_4c9c_91d1_598ff2e642e3.slice - libcontainer container kubepods-besteffort-podcef5c0dc_a777_4c9c_91d1_598ff2e642e3.slice. Sep 3 23:38:39.154319 kubelet[2627]: W0903 23:38:39.154261 2627 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 3 23:38:39.154319 kubelet[2627]: E0903 23:38:39.154306 2627 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 3 23:38:39.154945 kubelet[2627]: W0903 23:38:39.154900 2627 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 3 23:38:39.154945 kubelet[2627]: E0903 23:38:39.154927 2627 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 3 23:38:39.168190 systemd[1]: Created slice kubepods-burstable-pod4dafee3a_2272_43e7_8323_9d1c6bab9769.slice - libcontainer container kubepods-burstable-pod4dafee3a_2272_43e7_8323_9d1c6bab9769.slice. Sep 3 23:38:39.247247 systemd[1]: Created slice kubepods-besteffort-pod5f46c986_63a3_46a0_bf40_9988b0adca7e.slice - libcontainer container kubepods-besteffort-pod5f46c986_63a3_46a0_bf40_9988b0adca7e.slice. Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.251911 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-hostproc\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.251959 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-xtables-lock\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.251988 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.252011 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-kernel\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.252035 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cni-path\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252751 kubelet[2627]: I0903 23:38:39.252056 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-cgroup\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252971 kubelet[2627]: I0903 23:38:39.252097 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqkf\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-kube-api-access-2xqkf\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252971 kubelet[2627]: I0903 23:38:39.252131 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-etc-cni-netd\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252971 kubelet[2627]: I0903 23:38:39.252152 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-lib-modules\") pod \"cilium-4sf9s\" (UID: 
\"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252971 kubelet[2627]: I0903 23:38:39.252171 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4dafee3a-2272-43e7-8323-9d1c6bab9769-clustermesh-secrets\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.252971 kubelet[2627]: I0903 23:38:39.252189 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-hubble-tls\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.253072 kubelet[2627]: I0903 23:38:39.252209 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path\") pod \"cilium-operator-5d85765b45-5jxn5\" (UID: \"5f46c986-63a3-46a0-bf40-9988b0adca7e\") " pod="kube-system/cilium-operator-5d85765b45-5jxn5" Sep 3 23:38:39.253072 kubelet[2627]: I0903 23:38:39.252239 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n98qf\" (UniqueName: \"kubernetes.io/projected/5f46c986-63a3-46a0-bf40-9988b0adca7e-kube-api-access-n98qf\") pod \"cilium-operator-5d85765b45-5jxn5\" (UID: \"5f46c986-63a3-46a0-bf40-9988b0adca7e\") " pod="kube-system/cilium-operator-5d85765b45-5jxn5" Sep 3 23:38:39.253072 kubelet[2627]: I0903 23:38:39.252277 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-bpf-maps\") pod \"cilium-4sf9s\" (UID: 
\"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.253072 kubelet[2627]: I0903 23:38:39.252300 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-net\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.253072 kubelet[2627]: I0903 23:38:39.252324 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-run\") pod \"cilium-4sf9s\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") " pod="kube-system/cilium-4sf9s" Sep 3 23:38:39.465973 containerd[1534]: time="2025-09-03T23:38:39.465941619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2hrp,Uid:cef5c0dc-a777-4c9c-91d1-598ff2e642e3,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:39.485317 containerd[1534]: time="2025-09-03T23:38:39.485283378Z" level=info msg="connecting to shim 26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8" address="unix:///run/containerd/s/99df076bee0687ad82e64a70966220a06d6bfb58168fca327e9e24787b6f467d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:39.508885 systemd[1]: Started cri-containerd-26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8.scope - libcontainer container 26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8. 
Sep 3 23:38:39.528703 containerd[1534]: time="2025-09-03T23:38:39.528656294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2hrp,Uid:cef5c0dc-a777-4c9c-91d1-598ff2e642e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8\"" Sep 3 23:38:39.531192 containerd[1534]: time="2025-09-03T23:38:39.531155321Z" level=info msg="CreateContainer within sandbox \"26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:38:39.541767 containerd[1534]: time="2025-09-03T23:38:39.541329537Z" level=info msg="Container a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:38:39.548003 containerd[1534]: time="2025-09-03T23:38:39.547968589Z" level=info msg="CreateContainer within sandbox \"26ec8072c4518907c8f2abafa72fcafdbe9dd43496d32399b31535a4a95818e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140\"" Sep 3 23:38:39.548709 containerd[1534]: time="2025-09-03T23:38:39.548601672Z" level=info msg="StartContainer for \"a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140\"" Sep 3 23:38:39.550343 containerd[1534]: time="2025-09-03T23:38:39.550319333Z" level=info msg="connecting to shim a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140" address="unix:///run/containerd/s/99df076bee0687ad82e64a70966220a06d6bfb58168fca327e9e24787b6f467d" protocol=ttrpc version=3 Sep 3 23:38:39.572895 systemd[1]: Started cri-containerd-a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140.scope - libcontainer container a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140. 
Sep 3 23:38:39.606509 containerd[1534]: time="2025-09-03T23:38:39.606469098Z" level=info msg="StartContainer for \"a16e87ff2cb59a7f7b996e16f9a98161d212c59133c76b50fe7c19aeb043b140\" returns successfully" Sep 3 23:38:39.862999 kubelet[2627]: I0903 23:38:39.862591 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h2hrp" podStartSLOduration=0.862574218 podStartE2EDuration="862.574218ms" podCreationTimestamp="2025-09-03 23:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:38:39.862504601 +0000 UTC m=+7.127324993" watchObservedRunningTime="2025-09-03 23:38:39.862574218 +0000 UTC m=+7.127394570" Sep 3 23:38:40.353836 kubelet[2627]: E0903 23:38:40.353777 2627 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 3 23:38:40.353971 kubelet[2627]: E0903 23:38:40.353889 2627 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path podName:4dafee3a-2272-43e7-8323-9d1c6bab9769 nodeName:}" failed. No retries permitted until 2025-09-03 23:38:40.853865196 +0000 UTC m=+8.118685588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path") pod "cilium-4sf9s" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769") : failed to sync configmap cache: timed out waiting for the condition Sep 3 23:38:40.353971 kubelet[2627]: E0903 23:38:40.353777 2627 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 3 23:38:40.354087 kubelet[2627]: E0903 23:38:40.353994 2627 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path podName:5f46c986-63a3-46a0-bf40-9988b0adca7e nodeName:}" failed. No retries permitted until 2025-09-03 23:38:40.853976563 +0000 UTC m=+8.118796915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path") pod "cilium-operator-5d85765b45-5jxn5" (UID: "5f46c986-63a3-46a0-bf40-9988b0adca7e") : failed to sync configmap cache: timed out waiting for the condition Sep 3 23:38:40.972618 containerd[1534]: time="2025-09-03T23:38:40.972561007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sf9s,Uid:4dafee3a-2272-43e7-8323-9d1c6bab9769,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:40.998311 containerd[1534]: time="2025-09-03T23:38:40.998159553Z" level=info msg="connecting to shim 107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:41.036933 systemd[1]: Started cri-containerd-107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527.scope - libcontainer container 107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527. 
Sep 3 23:38:41.050650 containerd[1534]: time="2025-09-03T23:38:41.050607487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5jxn5,Uid:5f46c986-63a3-46a0-bf40-9988b0adca7e,Namespace:kube-system,Attempt:0,}" Sep 3 23:38:41.073934 containerd[1534]: time="2025-09-03T23:38:41.073887841Z" level=info msg="connecting to shim 174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e" address="unix:///run/containerd/s/1b175e53b6262ae9f8eecabfaa0a89f212338b04094f9287fc05baa21711b962" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:38:41.076294 containerd[1534]: time="2025-09-03T23:38:41.076256354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sf9s,Uid:4dafee3a-2272-43e7-8323-9d1c6bab9769,Namespace:kube-system,Attempt:0,} returns sandbox id \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\"" Sep 3 23:38:41.079452 containerd[1534]: time="2025-09-03T23:38:41.079419135Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 3 23:38:41.097901 systemd[1]: Started cri-containerd-174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e.scope - libcontainer container 174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e. Sep 3 23:38:41.130139 containerd[1534]: time="2025-09-03T23:38:41.130071711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5jxn5,Uid:5f46c986-63a3-46a0-bf40-9988b0adca7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\"" Sep 3 23:38:48.391250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140095091.mount: Deactivated successfully. Sep 3 23:38:48.598815 update_engine[1512]: I20250903 23:38:48.598760 1512 update_attempter.cc:509] Updating boot flags... 
Sep 3 23:38:49.685805 containerd[1534]: time="2025-09-03T23:38:49.685756402Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:38:49.686861 containerd[1534]: time="2025-09-03T23:38:49.686822519Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 3 23:38:49.687568 containerd[1534]: time="2025-09-03T23:38:49.687542029Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:38:49.689787 containerd[1534]: time="2025-09-03T23:38:49.689758546Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.610166204s" Sep 3 23:38:49.689895 containerd[1534]: time="2025-09-03T23:38:49.689877564Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 3 23:38:49.696062 containerd[1534]: time="2025-09-03T23:38:49.696034194Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 3 23:38:49.710076 containerd[1534]: time="2025-09-03T23:38:49.710045081Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:38:49.724758 containerd[1534]: time="2025-09-03T23:38:49.724352911Z" level=info msg="Container a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:38:49.729200 containerd[1534]: time="2025-09-03T23:38:49.729160767Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\"" Sep 3 23:38:49.729776 containerd[1534]: time="2025-09-03T23:38:49.729752374Z" level=info msg="StartContainer for \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\"" Sep 3 23:38:49.730659 containerd[1534]: time="2025-09-03T23:38:49.730626919Z" level=info msg="connecting to shim a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" protocol=ttrpc version=3 Sep 3 23:38:49.777861 systemd[1]: Started cri-containerd-a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af.scope - libcontainer container a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af. Sep 3 23:38:49.802751 containerd[1534]: time="2025-09-03T23:38:49.802488029Z" level=info msg="StartContainer for \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" returns successfully" Sep 3 23:38:49.814443 systemd[1]: cri-containerd-a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af.scope: Deactivated successfully. 
Sep 3 23:38:49.836870 containerd[1534]: time="2025-09-03T23:38:49.836816025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" id:\"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" pid:3065 exited_at:{seconds:1756942729 nanos:826237406}"
Sep 3 23:38:49.837640 containerd[1534]: time="2025-09-03T23:38:49.837584839Z" level=info msg="received exit event container_id:\"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" id:\"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" pid:3065 exited_at:{seconds:1756942729 nanos:826237406}"
Sep 3 23:38:50.724272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af-rootfs.mount: Deactivated successfully.
Sep 3 23:38:50.885556 containerd[1534]: time="2025-09-03T23:38:50.885517418Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 3 23:38:50.914528 containerd[1534]: time="2025-09-03T23:38:50.914482207Z" level=info msg="Container 3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:38:50.918548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088390036.mount: Deactivated successfully.
Sep 3 23:38:50.920337 containerd[1534]: time="2025-09-03T23:38:50.920302297Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\""
Sep 3 23:38:50.921222 containerd[1534]: time="2025-09-03T23:38:50.921117194Z" level=info msg="StartContainer for \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\""
Sep 3 23:38:50.922404 containerd[1534]: time="2025-09-03T23:38:50.922376496Z" level=info msg="connecting to shim 3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" protocol=ttrpc version=3
Sep 3 23:38:50.950903 systemd[1]: Started cri-containerd-3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8.scope - libcontainer container 3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8.
Sep 3 23:38:50.983995 containerd[1534]: time="2025-09-03T23:38:50.983873683Z" level=info msg="StartContainer for \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" returns successfully"
Sep 3 23:38:50.998073 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:38:50.998561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:38:50.998873 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:38:51.000461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:38:51.001603 systemd[1]: cri-containerd-3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8.scope: Deactivated successfully.
Sep 3 23:38:51.002834 containerd[1534]: time="2025-09-03T23:38:51.002749426Z" level=info msg="received exit event container_id:\"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" id:\"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" pid:3111 exited_at:{seconds:1756942731 nanos:2133195}"
Sep 3 23:38:51.003477 containerd[1534]: time="2025-09-03T23:38:51.003439810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" id:\"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" pid:3111 exited_at:{seconds:1756942731 nanos:2133195}"
Sep 3 23:38:51.020664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:38:51.724358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937223754.mount: Deactivated successfully.
Sep 3 23:38:51.724483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8-rootfs.mount: Deactivated successfully.
Sep 3 23:38:51.889843 containerd[1534]: time="2025-09-03T23:38:51.889787926Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:38:51.902155 containerd[1534]: time="2025-09-03T23:38:51.902117233Z" level=info msg="Container 1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:38:51.905945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667796333.mount: Deactivated successfully.
Sep 3 23:38:51.912017 containerd[1534]: time="2025-09-03T23:38:51.911974652Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\""
Sep 3 23:38:51.912575 containerd[1534]: time="2025-09-03T23:38:51.912552826Z" level=info msg="StartContainer for \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\""
Sep 3 23:38:51.914294 containerd[1534]: time="2025-09-03T23:38:51.914061090Z" level=info msg="connecting to shim 1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" protocol=ttrpc version=3
Sep 3 23:38:51.941906 systemd[1]: Started cri-containerd-1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b.scope - libcontainer container 1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b.
Sep 3 23:38:51.979974 systemd[1]: cri-containerd-1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b.scope: Deactivated successfully.
Sep 3 23:38:51.983132 containerd[1534]: time="2025-09-03T23:38:51.983026928Z" level=info msg="StartContainer for \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" returns successfully"
Sep 3 23:38:51.994010 containerd[1534]: time="2025-09-03T23:38:51.992894672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" id:\"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" pid:3163 exited_at:{seconds:1756942731 nanos:992569689}"
Sep 3 23:38:51.996289 containerd[1534]: time="2025-09-03T23:38:51.996250309Z" level=info msg="received exit event container_id:\"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" id:\"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" pid:3163 exited_at:{seconds:1756942731 nanos:992569689}"
Sep 3 23:38:52.700726 containerd[1534]: time="2025-09-03T23:38:52.700666179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:52.701472 containerd[1534]: time="2025-09-03T23:38:52.701262629Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 3 23:38:52.702312 containerd[1534]: time="2025-09-03T23:38:52.702277735Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:38:52.703566 containerd[1534]: time="2025-09-03T23:38:52.703532541Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.007367644s"
Sep 3 23:38:52.703633 containerd[1534]: time="2025-09-03T23:38:52.703566996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 3 23:38:52.706109 containerd[1534]: time="2025-09-03T23:38:52.706081250Z" level=info msg="CreateContainer within sandbox \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 3 23:38:52.717114 containerd[1534]: time="2025-09-03T23:38:52.717067139Z" level=info msg="Container 2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:38:52.722092 containerd[1534]: time="2025-09-03T23:38:52.722055751Z" level=info msg="CreateContainer within sandbox \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\""
Sep 3 23:38:52.722553 containerd[1534]: time="2025-09-03T23:38:52.722530991Z" level=info msg="StartContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\""
Sep 3 23:38:52.724086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b-rootfs.mount: Deactivated successfully.
Sep 3 23:38:52.725739 containerd[1534]: time="2025-09-03T23:38:52.725652300Z" level=info msg="connecting to shim 2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20" address="unix:///run/containerd/s/1b175e53b6262ae9f8eecabfaa0a89f212338b04094f9287fc05baa21711b962" protocol=ttrpc version=3
Sep 3 23:38:52.751946 systemd[1]: Started cri-containerd-2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20.scope - libcontainer container 2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20.
Sep 3 23:38:52.804261 containerd[1534]: time="2025-09-03T23:38:52.804185324Z" level=info msg="StartContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" returns successfully"
Sep 3 23:38:52.896280 containerd[1534]: time="2025-09-03T23:38:52.896230056Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:38:52.920206 containerd[1534]: time="2025-09-03T23:38:52.920165537Z" level=info msg="Container c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:38:52.931449 kubelet[2627]: I0903 23:38:52.931383 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5jxn5" podStartSLOduration=2.3584925500000002 podStartE2EDuration="13.931363114s" podCreationTimestamp="2025-09-03 23:38:39 +0000 UTC" firstStartedPulling="2025-09-03 23:38:41.131624981 +0000 UTC m=+8.396445373" lastFinishedPulling="2025-09-03 23:38:52.704495545 +0000 UTC m=+19.969315937" observedRunningTime="2025-09-03 23:38:52.931240143 +0000 UTC m=+20.196060535" watchObservedRunningTime="2025-09-03 23:38:52.931363114 +0000 UTC m=+20.196183587"
Sep 3 23:38:52.933575 containerd[1534]: time="2025-09-03T23:38:52.933527903Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\""
Sep 3 23:38:52.936275 containerd[1534]: time="2025-09-03T23:38:52.936230076Z" level=info msg="StartContainer for \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\""
Sep 3 23:38:52.937929 containerd[1534]: time="2025-09-03T23:38:52.937900817Z" level=info msg="connecting to shim c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" protocol=ttrpc version=3
Sep 3 23:38:52.983903 systemd[1]: Started cri-containerd-c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8.scope - libcontainer container cri-containerd-c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8.
Sep 3 23:38:53.012708 systemd[1]: cri-containerd-c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8.scope: Deactivated successfully.
Sep 3 23:38:53.013833 containerd[1534]: time="2025-09-03T23:38:53.013791799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" id:\"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" pid:3249 exited_at:{seconds:1756942733 nanos:13440379}"
Sep 3 23:38:53.014099 containerd[1534]: time="2025-09-03T23:38:53.014002043Z" level=info msg="received exit event container_id:\"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" id:\"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" pid:3249 exited_at:{seconds:1756942733 nanos:13440379}"
Sep 3 23:38:53.021559 containerd[1534]: time="2025-09-03T23:38:53.021521412Z" level=info msg="StartContainer for \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" returns successfully"
Sep 3 23:38:53.724282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8-rootfs.mount: Deactivated successfully.
Sep 3 23:38:53.910237 containerd[1534]: time="2025-09-03T23:38:53.910173637Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:38:53.927927 containerd[1534]: time="2025-09-03T23:38:53.927883163Z" level=info msg="Container f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:38:53.941080 containerd[1534]: time="2025-09-03T23:38:53.941024580Z" level=info msg="CreateContainer within sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\""
Sep 3 23:38:53.941516 containerd[1534]: time="2025-09-03T23:38:53.941469398Z" level=info msg="StartContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\""
Sep 3 23:38:53.942650 containerd[1534]: time="2025-09-03T23:38:53.942548030Z" level=info msg="connecting to shim f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb" address="unix:///run/containerd/s/29662481b6e08763180ecae43c652b7c44fe11d9dfacd685ce9cf2e0449566a5" protocol=ttrpc version=3
Sep 3 23:38:53.968909 systemd[1]: Started cri-containerd-f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb.scope - libcontainer container f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb.
Sep 3 23:38:54.004610 containerd[1534]: time="2025-09-03T23:38:54.004339807Z" level=info msg="StartContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" returns successfully"
Sep 3 23:38:54.098036 containerd[1534]: time="2025-09-03T23:38:54.097992094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" id:\"499a43dfedfb066fda64104ad5b7564397742fe21b303eeb26520b56cbd9d378\" pid:3320 exited_at:{seconds:1756942734 nanos:97644241}"
Sep 3 23:38:54.117059 kubelet[2627]: I0903 23:38:54.116986 2627 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 3 23:38:54.159050 systemd[1]: Created slice kubepods-burstable-podab13303d_e917_4e3f_b282_9c864368decd.slice - libcontainer container kubepods-burstable-podab13303d_e917_4e3f_b282_9c864368decd.slice.
Sep 3 23:38:54.165700 systemd[1]: Created slice kubepods-burstable-pod5afbfb04_470a_4516_aeca_6e63f4b68ed1.slice - libcontainer container kubepods-burstable-pod5afbfb04_470a_4516_aeca_6e63f4b68ed1.slice.
Sep 3 23:38:54.165975 kubelet[2627]: I0903 23:38:54.165947 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmh7\" (UniqueName: \"kubernetes.io/projected/5afbfb04-470a-4516-aeca-6e63f4b68ed1-kube-api-access-pqmh7\") pod \"coredns-7c65d6cfc9-vl5tt\" (UID: \"5afbfb04-470a-4516-aeca-6e63f4b68ed1\") " pod="kube-system/coredns-7c65d6cfc9-vl5tt"
Sep 3 23:38:54.166078 kubelet[2627]: I0903 23:38:54.166065 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab13303d-e917-4e3f-b282-9c864368decd-config-volume\") pod \"coredns-7c65d6cfc9-zlbq9\" (UID: \"ab13303d-e917-4e3f-b282-9c864368decd\") " pod="kube-system/coredns-7c65d6cfc9-zlbq9"
Sep 3 23:38:54.166154 kubelet[2627]: I0903 23:38:54.166142 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9w6\" (UniqueName: \"kubernetes.io/projected/ab13303d-e917-4e3f-b282-9c864368decd-kube-api-access-mz9w6\") pod \"coredns-7c65d6cfc9-zlbq9\" (UID: \"ab13303d-e917-4e3f-b282-9c864368decd\") " pod="kube-system/coredns-7c65d6cfc9-zlbq9"
Sep 3 23:38:54.166213 kubelet[2627]: I0903 23:38:54.166202 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5afbfb04-470a-4516-aeca-6e63f4b68ed1-config-volume\") pod \"coredns-7c65d6cfc9-vl5tt\" (UID: \"5afbfb04-470a-4516-aeca-6e63f4b68ed1\") " pod="kube-system/coredns-7c65d6cfc9-vl5tt"
Sep 3 23:38:54.463158 containerd[1534]: time="2025-09-03T23:38:54.463119419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlbq9,Uid:ab13303d-e917-4e3f-b282-9c864368decd,Namespace:kube-system,Attempt:0,}"
Sep 3 23:38:54.472882 containerd[1534]: time="2025-09-03T23:38:54.472801637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl5tt,Uid:5afbfb04-470a-4516-aeca-6e63f4b68ed1,Namespace:kube-system,Attempt:0,}"
Sep 3 23:38:54.930544 kubelet[2627]: I0903 23:38:54.930003 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4sf9s" podStartSLOduration=7.313024238 podStartE2EDuration="15.9299876s" podCreationTimestamp="2025-09-03 23:38:39 +0000 UTC" firstStartedPulling="2025-09-03 23:38:41.078894347 +0000 UTC m=+8.343714739" lastFinishedPulling="2025-09-03 23:38:49.695857709 +0000 UTC m=+16.960678101" observedRunningTime="2025-09-03 23:38:54.928310439 +0000 UTC m=+22.193130831" watchObservedRunningTime="2025-09-03 23:38:54.9299876 +0000 UTC m=+22.194807992"
Sep 3 23:38:56.843348 systemd-networkd[1434]: cilium_host: Link UP
Sep 3 23:38:56.843507 systemd-networkd[1434]: cilium_net: Link UP
Sep 3 23:38:56.843638 systemd-networkd[1434]: cilium_net: Gained carrier
Sep 3 23:38:56.843769 systemd-networkd[1434]: cilium_host: Gained carrier
Sep 3 23:38:56.934919 systemd-networkd[1434]: cilium_vxlan: Link UP
Sep 3 23:38:56.934926 systemd-networkd[1434]: cilium_vxlan: Gained carrier
Sep 3 23:38:57.187741 kernel: NET: Registered PF_ALG protocol family
Sep 3 23:38:57.471897 systemd-networkd[1434]: cilium_net: Gained IPv6LL
Sep 3 23:38:57.739868 systemd-networkd[1434]: lxc_health: Link UP
Sep 3 23:38:57.740126 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 3 23:38:57.791895 systemd-networkd[1434]: cilium_host: Gained IPv6LL
Sep 3 23:38:57.995835 systemd-networkd[1434]: lxcd894c2fd6ad6: Link UP
Sep 3 23:38:58.005749 kernel: eth0: renamed from tmp5cb20
Sep 3 23:38:58.007153 systemd-networkd[1434]: lxcd894c2fd6ad6: Gained carrier
Sep 3 23:38:58.018812 systemd-networkd[1434]: lxc702dd839d09a: Link UP
Sep 3 23:38:58.019733 kernel: eth0: renamed from tmpf7fcd
Sep 3 23:38:58.022755 systemd-networkd[1434]: lxc702dd839d09a: Gained carrier
Sep 3 23:38:58.242881 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
Sep 3 23:38:59.520842 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 3 23:38:59.647874 systemd-networkd[1434]: lxcd894c2fd6ad6: Gained IPv6LL
Sep 3 23:38:59.711847 systemd-networkd[1434]: lxc702dd839d09a: Gained IPv6LL
Sep 3 23:39:01.650563 containerd[1534]: time="2025-09-03T23:39:01.650440373Z" level=info msg="connecting to shim f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de" address="unix:///run/containerd/s/af2a7d668d9fdc34e5d54917811a5a7afdd4eb7170f40fbeac186afe94f7daea" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:39:01.651954 containerd[1534]: time="2025-09-03T23:39:01.651904026Z" level=info msg="connecting to shim 5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8" address="unix:///run/containerd/s/72a754c454c216cc7a51394568e60f9ea4ab6886bf9564b842aecf389d478a31" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:39:01.677892 systemd[1]: Started cri-containerd-5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8.scope - libcontainer container 5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8.
Sep 3 23:39:01.682508 systemd[1]: Started cri-containerd-f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de.scope - libcontainer container f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de.
Sep 3 23:39:01.693791 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 3 23:39:01.699641 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 3 23:39:01.717172 containerd[1534]: time="2025-09-03T23:39:01.717130108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlbq9,Uid:ab13303d-e917-4e3f-b282-9c864368decd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8\""
Sep 3 23:39:01.720218 containerd[1534]: time="2025-09-03T23:39:01.720088825Z" level=info msg="CreateContainer within sandbox \"5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:39:01.734445 containerd[1534]: time="2025-09-03T23:39:01.734387547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl5tt,Uid:5afbfb04-470a-4516-aeca-6e63f4b68ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de\""
Sep 3 23:39:01.738016 containerd[1534]: time="2025-09-03T23:39:01.737969760Z" level=info msg="CreateContainer within sandbox \"f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:39:01.746614 containerd[1534]: time="2025-09-03T23:39:01.746571432Z" level=info msg="Container 40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:01.752442 containerd[1534]: time="2025-09-03T23:39:01.752386596Z" level=info msg="Container f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:01.788476 containerd[1534]: time="2025-09-03T23:39:01.788406941Z" level=info msg="CreateContainer within sandbox \"5cb20c1dca6370ddd4311d076f9ba3dacc22c3578ad1010a0729572e61c7f7c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce\""
Sep 3 23:39:01.789158 containerd[1534]: time="2025-09-03T23:39:01.789124143Z" level=info msg="StartContainer for \"40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce\""
Sep 3 23:39:01.790782 containerd[1534]: time="2025-09-03T23:39:01.790752164Z" level=info msg="connecting to shim 40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce" address="unix:///run/containerd/s/72a754c454c216cc7a51394568e60f9ea4ab6886bf9564b842aecf389d478a31" protocol=ttrpc version=3
Sep 3 23:39:01.795369 containerd[1534]: time="2025-09-03T23:39:01.795331338Z" level=info msg="CreateContainer within sandbox \"f7fcd951a3fa34f8c23d72ccaac7c21bbfeaffa9128b1a0118ed99463d5c25de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2\""
Sep 3 23:39:01.797741 containerd[1534]: time="2025-09-03T23:39:01.796879536Z" level=info msg="StartContainer for \"f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2\""
Sep 3 23:39:01.799850 containerd[1534]: time="2025-09-03T23:39:01.799819928Z" level=info msg="connecting to shim f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2" address="unix:///run/containerd/s/af2a7d668d9fdc34e5d54917811a5a7afdd4eb7170f40fbeac186afe94f7daea" protocol=ttrpc version=3
Sep 3 23:39:01.829893 systemd[1]: Started cri-containerd-f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2.scope - libcontainer container f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2.
Sep 3 23:39:01.833061 systemd[1]: Started cri-containerd-40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce.scope - libcontainer container 40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce.
Sep 3 23:39:01.864069 containerd[1534]: time="2025-09-03T23:39:01.863946979Z" level=info msg="StartContainer for \"40adb3dfbe13a9a078c745bd521928a3d0f689b4993c89bdab9a0e8c412486ce\" returns successfully"
Sep 3 23:39:01.888795 containerd[1534]: time="2025-09-03T23:39:01.888417217Z" level=info msg="StartContainer for \"f2a589822b071786a4f8a32f1288dd11f827ae8ccd737dcbf11ac8e2e18962f2\" returns successfully"
Sep 3 23:39:01.970463 kubelet[2627]: I0903 23:39:01.970386 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zlbq9" podStartSLOduration=22.970366867 podStartE2EDuration="22.970366867s" podCreationTimestamp="2025-09-03 23:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:39:01.969684474 +0000 UTC m=+29.234504866" watchObservedRunningTime="2025-09-03 23:39:01.970366867 +0000 UTC m=+29.235187259"
Sep 3 23:39:01.971231 kubelet[2627]: I0903 23:39:01.970528 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vl5tt" podStartSLOduration=22.970521991 podStartE2EDuration="22.970521991s" podCreationTimestamp="2025-09-03 23:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:39:01.951992952 +0000 UTC m=+29.216813344" watchObservedRunningTime="2025-09-03 23:39:01.970521991 +0000 UTC m=+29.235342383"
Sep 3 23:39:02.229353 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:56818.service - OpenSSH per-connection server daemon (10.0.0.1:56818).
Sep 3 23:39:02.274013 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 56818 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:02.275312 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:02.279639 systemd-logind[1510]: New session 8 of user core.
Sep 3 23:39:02.290903 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 3 23:39:02.411498 sshd[3979]: Connection closed by 10.0.0.1 port 56818
Sep 3 23:39:02.411820 sshd-session[3977]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:02.415229 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:56818.service: Deactivated successfully.
Sep 3 23:39:02.417175 systemd[1]: session-8.scope: Deactivated successfully.
Sep 3 23:39:02.418230 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit.
Sep 3 23:39:02.419802 systemd-logind[1510]: Removed session 8.
Sep 3 23:39:07.430977 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866).
Sep 3 23:39:07.491544 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:07.492993 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:07.497613 systemd-logind[1510]: New session 9 of user core.
Sep 3 23:39:07.507910 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 3 23:39:07.630562 sshd[4006]: Connection closed by 10.0.0.1 port 56866
Sep 3 23:39:07.630892 sshd-session[4004]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:07.634529 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:56866.service: Deactivated successfully.
Sep 3 23:39:07.636254 systemd[1]: session-9.scope: Deactivated successfully.
Sep 3 23:39:07.637064 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit.
Sep 3 23:39:07.638251 systemd-logind[1510]: Removed session 9.
Sep 3 23:39:12.651256 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:60816.service - OpenSSH per-connection server daemon (10.0.0.1:60816).
Sep 3 23:39:12.707130 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 60816 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:12.708362 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:12.712278 systemd-logind[1510]: New session 10 of user core.
Sep 3 23:39:12.725928 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 3 23:39:12.843762 sshd[4024]: Connection closed by 10.0.0.1 port 60816
Sep 3 23:39:12.843975 sshd-session[4022]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:12.847301 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:60816.service: Deactivated successfully.
Sep 3 23:39:12.849413 systemd[1]: session-10.scope: Deactivated successfully.
Sep 3 23:39:12.850377 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit.
Sep 3 23:39:12.851764 systemd-logind[1510]: Removed session 10.
Sep 3 23:39:17.866120 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884).
Sep 3 23:39:17.919989 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:17.921225 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:17.926245 systemd-logind[1510]: New session 11 of user core.
Sep 3 23:39:17.940871 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 3 23:39:18.064733 sshd[4041]: Connection closed by 10.0.0.1 port 60884
Sep 3 23:39:18.065230 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:18.077264 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:60884.service: Deactivated successfully.
Sep 3 23:39:18.079121 systemd[1]: session-11.scope: Deactivated successfully.
Sep 3 23:39:18.080389 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit.
Sep 3 23:39:18.082945 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:60906.service - OpenSSH per-connection server daemon (10.0.0.1:60906).
Sep 3 23:39:18.083567 systemd-logind[1510]: Removed session 11.
Sep 3 23:39:18.135482 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 60906 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:18.136661 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:18.142783 systemd-logind[1510]: New session 12 of user core.
Sep 3 23:39:18.147935 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 3 23:39:18.301547 sshd[4058]: Connection closed by 10.0.0.1 port 60906
Sep 3 23:39:18.302562 sshd-session[4056]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:18.309299 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:60906.service: Deactivated successfully.
Sep 3 23:39:18.312861 systemd[1]: session-12.scope: Deactivated successfully.
Sep 3 23:39:18.314431 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit.
Sep 3 23:39:18.319024 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:60938.service - OpenSSH per-connection server daemon (10.0.0.1:60938).
Sep 3 23:39:18.320077 systemd-logind[1510]: Removed session 12.
Sep 3 23:39:18.375531 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 60938 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:18.376839 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:18.381605 systemd-logind[1510]: New session 13 of user core.
Sep 3 23:39:18.395902 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 3 23:39:18.512123 sshd[4071]: Connection closed by 10.0.0.1 port 60938
Sep 3 23:39:18.512452 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:18.516497 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:60938.service: Deactivated successfully.
Sep 3 23:39:18.518301 systemd[1]: session-13.scope: Deactivated successfully.
Sep 3 23:39:18.519256 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit.
Sep 3 23:39:18.520523 systemd-logind[1510]: Removed session 13.
Sep 3 23:39:23.532055 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:45660.service - OpenSSH per-connection server daemon (10.0.0.1:45660).
Sep 3 23:39:23.580745 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 45660 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:23.581893 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:23.585668 systemd-logind[1510]: New session 14 of user core.
Sep 3 23:39:23.595922 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 3 23:39:23.706362 sshd[4087]: Connection closed by 10.0.0.1 port 45660
Sep 3 23:39:23.706700 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:23.710280 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:45660.service: Deactivated successfully.
Sep 3 23:39:23.712026 systemd[1]: session-14.scope: Deactivated successfully.
Sep 3 23:39:23.715021 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit.
Sep 3 23:39:23.716409 systemd-logind[1510]: Removed session 14.
Sep 3 23:39:28.725057 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664).
Sep 3 23:39:28.783816 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:28.785556 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:28.790568 systemd-logind[1510]: New session 15 of user core.
Sep 3 23:39:28.800900 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:39:28.921818 sshd[4104]: Connection closed by 10.0.0.1 port 45664
Sep 3 23:39:28.921196 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:28.936308 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:45664.service: Deactivated successfully.
Sep 3 23:39:28.940270 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:39:28.941672 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:39:28.945363 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:45666.service - OpenSSH per-connection server daemon (10.0.0.1:45666).
Sep 3 23:39:28.946669 systemd-logind[1510]: Removed session 15.
Sep 3 23:39:29.003095 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 45666 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:29.004382 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:29.008693 systemd-logind[1510]: New session 16 of user core.
Sep 3 23:39:29.018109 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:39:29.207294 sshd[4119]: Connection closed by 10.0.0.1 port 45666
Sep 3 23:39:29.207786 sshd-session[4117]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:29.217326 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:45666.service: Deactivated successfully.
Sep 3 23:39:29.219036 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:39:29.219769 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:39:29.222477 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:45682.service - OpenSSH per-connection server daemon (10.0.0.1:45682).
Sep 3 23:39:29.223403 systemd-logind[1510]: Removed session 16.
Sep 3 23:39:29.283836 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:29.284754 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:29.289602 systemd-logind[1510]: New session 17 of user core.
Sep 3 23:39:29.299910 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:39:30.486642 sshd[4132]: Connection closed by 10.0.0.1 port 45682
Sep 3 23:39:30.487932 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:30.496545 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:45682.service: Deactivated successfully.
Sep 3 23:39:30.499221 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:39:30.501738 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:39:30.508283 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:39340.service - OpenSSH per-connection server daemon (10.0.0.1:39340).
Sep 3 23:39:30.509438 systemd-logind[1510]: Removed session 17.
Sep 3 23:39:30.577355 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 39340 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:30.579628 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:30.584951 systemd-logind[1510]: New session 18 of user core.
Sep 3 23:39:30.601945 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:39:30.838654 sshd[4153]: Connection closed by 10.0.0.1 port 39340
Sep 3 23:39:30.839441 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:30.848693 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:39340.service: Deactivated successfully.
Sep 3 23:39:30.854389 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:39:30.860109 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:39:30.867620 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:39356.service - OpenSSH per-connection server daemon (10.0.0.1:39356).
Sep 3 23:39:30.870823 systemd-logind[1510]: Removed session 18.
Sep 3 23:39:30.927032 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 39356 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:30.928455 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:30.932542 systemd-logind[1510]: New session 19 of user core.
Sep 3 23:39:30.939903 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:39:31.061765 sshd[4166]: Connection closed by 10.0.0.1 port 39356
Sep 3 23:39:31.061880 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:31.065394 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:39356.service: Deactivated successfully.
Sep 3 23:39:31.067116 systemd[1]: session-19.scope: Deactivated successfully.
Sep 3 23:39:31.070051 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Sep 3 23:39:31.071819 systemd-logind[1510]: Removed session 19.
Sep 3 23:39:36.077474 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:39362.service - OpenSSH per-connection server daemon (10.0.0.1:39362).
Sep 3 23:39:36.138903 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 39362 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:36.140305 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:36.144402 systemd-logind[1510]: New session 20 of user core.
Sep 3 23:39:36.153896 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 3 23:39:36.265760 sshd[4186]: Connection closed by 10.0.0.1 port 39362
Sep 3 23:39:36.266324 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:36.269596 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:39362.service: Deactivated successfully.
Sep 3 23:39:36.271296 systemd[1]: session-20.scope: Deactivated successfully.
Sep 3 23:39:36.273013 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Sep 3 23:39:36.274451 systemd-logind[1510]: Removed session 20.
Sep 3 23:39:41.284493 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:36872.service - OpenSSH per-connection server daemon (10.0.0.1:36872).
Sep 3 23:39:41.349147 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 36872 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:41.350609 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:41.355674 systemd-logind[1510]: New session 21 of user core.
Sep 3 23:39:41.364952 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 3 23:39:41.487160 sshd[4204]: Connection closed by 10.0.0.1 port 36872
Sep 3 23:39:41.487550 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:41.492825 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:36872.service: Deactivated successfully.
Sep 3 23:39:41.494786 systemd[1]: session-21.scope: Deactivated successfully.
Sep 3 23:39:41.496280 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Sep 3 23:39:41.498200 systemd-logind[1510]: Removed session 21.
Sep 3 23:39:44.831065 kubelet[2627]: E0903 23:39:44.830955 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:46.502382 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:36876.service - OpenSSH per-connection server daemon (10.0.0.1:36876).
Sep 3 23:39:46.545492 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 36876 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:46.546907 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:46.551249 systemd-logind[1510]: New session 22 of user core.
Sep 3 23:39:46.560909 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 3 23:39:46.670711 sshd[4219]: Connection closed by 10.0.0.1 port 36876
Sep 3 23:39:46.671042 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:46.682116 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:36876.service: Deactivated successfully.
Sep 3 23:39:46.685254 systemd[1]: session-22.scope: Deactivated successfully.
Sep 3 23:39:46.686031 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit.
Sep 3 23:39:46.688898 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:36890.service - OpenSSH per-connection server daemon (10.0.0.1:36890).
Sep 3 23:39:46.689753 systemd-logind[1510]: Removed session 22.
Sep 3 23:39:46.744252 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 36890 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:46.745742 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:46.749782 systemd-logind[1510]: New session 23 of user core.
Sep 3 23:39:46.756873 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 3 23:39:48.588832 containerd[1534]: time="2025-09-03T23:39:48.588785724Z" level=info msg="StopContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" with timeout 30 (s)"
Sep 3 23:39:48.589344 containerd[1534]: time="2025-09-03T23:39:48.589314079Z" level=info msg="Stop container \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" with signal terminated"
Sep 3 23:39:48.601021 systemd[1]: cri-containerd-2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20.scope: Deactivated successfully.
Sep 3 23:39:48.607893 containerd[1534]: time="2025-09-03T23:39:48.607847625Z" level=info msg="received exit event container_id:\"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" id:\"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" pid:3214 exited_at:{seconds:1756942788 nanos:606815313}"
Sep 3 23:39:48.608029 containerd[1534]: time="2025-09-03T23:39:48.607995132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" id:\"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" pid:3214 exited_at:{seconds:1756942788 nanos:606815313}"
Sep 3 23:39:48.614632 containerd[1534]: time="2025-09-03T23:39:48.614200245Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:39:48.620649 containerd[1534]: time="2025-09-03T23:39:48.620599062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" id:\"a958710cf53d8fe3139f4c9f51e9975188729e2b25a7875b862bced21c805d4d\" pid:4263 exited_at:{seconds:1756942788 nanos:620353483}"
Sep 3 23:39:48.622417 containerd[1534]: time="2025-09-03T23:39:48.622355993Z" level=info msg="StopContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" with timeout 2 (s)"
Sep 3 23:39:48.622763 containerd[1534]: time="2025-09-03T23:39:48.622705443Z" level=info msg="Stop container \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" with signal terminated"
Sep 3 23:39:48.631261 systemd-networkd[1434]: lxc_health: Link DOWN
Sep 3 23:39:48.631594 systemd-networkd[1434]: lxc_health: Lost carrier
Sep 3 23:39:48.636537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20-rootfs.mount: Deactivated successfully.
Sep 3 23:39:48.651583 containerd[1534]: time="2025-09-03T23:39:48.651530115Z" level=info msg="StopContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" returns successfully"
Sep 3 23:39:48.654371 containerd[1534]: time="2025-09-03T23:39:48.654191529Z" level=info msg="StopPodSandbox for \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\""
Sep 3 23:39:48.655582 systemd[1]: cri-containerd-f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb.scope: Deactivated successfully.
Sep 3 23:39:48.655917 systemd[1]: cri-containerd-f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb.scope: Consumed 6.242s CPU time, 124.2M memory peak, 128K read from disk, 12.9M written to disk.
Sep 3 23:39:48.659403 containerd[1534]: time="2025-09-03T23:39:48.659249940Z" level=info msg="received exit event container_id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" pid:3287 exited_at:{seconds:1756942788 nanos:658935326}"
Sep 3 23:39:48.659403 containerd[1534]: time="2025-09-03T23:39:48.659385568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" id:\"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" pid:3287 exited_at:{seconds:1756942788 nanos:658935326}"
Sep 3 23:39:48.664580 containerd[1534]: time="2025-09-03T23:39:48.664314390Z" level=info msg="Container to stop \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.671231 systemd[1]: cri-containerd-174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e.scope: Deactivated successfully.
Sep 3 23:39:48.673506 containerd[1534]: time="2025-09-03T23:39:48.673467852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" id:\"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" pid:2995 exit_status:137 exited_at:{seconds:1756942788 nanos:671833911}"
Sep 3 23:39:48.683835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb-rootfs.mount: Deactivated successfully.
Sep 3 23:39:48.694579 containerd[1534]: time="2025-09-03T23:39:48.694501986Z" level=info msg="StopContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" returns successfully"
Sep 3 23:39:48.695050 containerd[1534]: time="2025-09-03T23:39:48.695024302Z" level=info msg="StopPodSandbox for \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\""
Sep 3 23:39:48.695431 containerd[1534]: time="2025-09-03T23:39:48.695165210Z" level=info msg="Container to stop \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.695431 containerd[1534]: time="2025-09-03T23:39:48.695185888Z" level=info msg="Container to stop \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.695431 containerd[1534]: time="2025-09-03T23:39:48.695197447Z" level=info msg="Container to stop \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.695431 containerd[1534]: time="2025-09-03T23:39:48.695206086Z" level=info msg="Container to stop \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.695431 containerd[1534]: time="2025-09-03T23:39:48.695215885Z" level=info msg="Container to stop \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:39:48.701230 systemd[1]: cri-containerd-107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527.scope: Deactivated successfully.
Sep 3 23:39:48.705685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e-rootfs.mount: Deactivated successfully.
Sep 3 23:39:48.710573 containerd[1534]: time="2025-09-03T23:39:48.710527945Z" level=info msg="shim disconnected" id=174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e namespace=k8s.io
Sep 3 23:39:48.725903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527-rootfs.mount: Deactivated successfully.
Sep 3 23:39:48.729389 containerd[1534]: time="2025-09-03T23:39:48.710845118Z" level=warning msg="cleaning up after shim disconnected" id=174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e namespace=k8s.io
Sep 3 23:39:48.729389 containerd[1534]: time="2025-09-03T23:39:48.729319669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:39:48.738137 containerd[1534]: time="2025-09-03T23:39:48.738094044Z" level=info msg="shim disconnected" id=107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527 namespace=k8s.io
Sep 3 23:39:48.738508 containerd[1534]: time="2025-09-03T23:39:48.738228433Z" level=warning msg="cleaning up after shim disconnected" id=107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527 namespace=k8s.io
Sep 3 23:39:48.738508 containerd[1534]: time="2025-09-03T23:39:48.738262870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:39:48.752439 containerd[1534]: time="2025-09-03T23:39:48.752383031Z" level=info msg="received exit event sandbox_id:\"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" exit_status:137 exited_at:{seconds:1756942788 nanos:706175195}"
Sep 3 23:39:48.752581 containerd[1534]: time="2025-09-03T23:39:48.752507340Z" level=info msg="received exit event sandbox_id:\"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" exit_status:137 exited_at:{seconds:1756942788 nanos:671833911}"
Sep 3 23:39:48.752888 containerd[1534]: time="2025-09-03T23:39:48.752859630Z" level=info msg="TearDown network for sandbox \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" successfully"
Sep 3 23:39:48.753047 containerd[1534]: time="2025-09-03T23:39:48.752968221Z" level=info msg="StopPodSandbox for \"174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e\" returns successfully"
Sep 3 23:39:48.753227 containerd[1534]: time="2025-09-03T23:39:48.752405669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" id:\"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" pid:2950 exit_status:137 exited_at:{seconds:1756942788 nanos:706175195}"
Sep 3 23:39:48.754305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-174ace8b49ef0e9dbb1ad43ef909cdb1e212d47036cfe58784b5eb1de160a84e-shm.mount: Deactivated successfully.
Sep 3 23:39:48.754573 containerd[1534]: time="2025-09-03T23:39:48.754542847Z" level=info msg="TearDown network for sandbox \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" successfully"
Sep 3 23:39:48.754661 containerd[1534]: time="2025-09-03T23:39:48.754574205Z" level=info msg="StopPodSandbox for \"107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527\" returns successfully"
Sep 3 23:39:48.813540 kubelet[2627]: I0903 23:39:48.813499 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.813540 kubelet[2627]: I0903 23:39:48.813544 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cni-path\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813567 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-lib-modules\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813581 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-net\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813597 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-kernel\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813615 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-xtables-lock\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813632 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xqkf\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-kube-api-access-2xqkf\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814394 kubelet[2627]: I0903 23:39:48.813646 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-run\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813661 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-cgroup\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813677 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n98qf\" (UniqueName: \"kubernetes.io/projected/5f46c986-63a3-46a0-bf40-9988b0adca7e-kube-api-access-n98qf\") pod \"5f46c986-63a3-46a0-bf40-9988b0adca7e\" (UID: \"5f46c986-63a3-46a0-bf40-9988b0adca7e\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813693 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path\") pod \"5f46c986-63a3-46a0-bf40-9988b0adca7e\" (UID: \"5f46c986-63a3-46a0-bf40-9988b0adca7e\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813710 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-hubble-tls\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813754 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4dafee3a-2272-43e7-8323-9d1c6bab9769-clustermesh-secrets\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814533 kubelet[2627]: I0903 23:39:48.813770 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-bpf-maps\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814649 kubelet[2627]: I0903 23:39:48.813786 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-hostproc\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.814649 kubelet[2627]: I0903 23:39:48.813802 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-etc-cni-netd\") pod \"4dafee3a-2272-43e7-8323-9d1c6bab9769\" (UID: \"4dafee3a-2272-43e7-8323-9d1c6bab9769\") "
Sep 3 23:39:48.818769 kubelet[2627]: I0903 23:39:48.818383 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.818769 kubelet[2627]: I0903 23:39:48.818381 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cni-path" (OuterVolumeSpecName: "cni-path") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.818769 kubelet[2627]: I0903 23:39:48.818454 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.818769 kubelet[2627]: I0903 23:39:48.818478 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.818769 kubelet[2627]: I0903 23:39:48.818495 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.819022 kubelet[2627]: I0903 23:39:48.818921 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.819022 kubelet[2627]: I0903 23:39:48.818954 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.820294 kubelet[2627]: I0903 23:39:48.820239 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 3 23:39:48.822067 kubelet[2627]: I0903 23:39:48.821875 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:39:48.822067 kubelet[2627]: I0903 23:39:48.821949 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dafee3a-2272-43e7-8323-9d1c6bab9769-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 3 23:39:48.822067 kubelet[2627]: I0903 23:39:48.821985 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-kube-api-access-2xqkf" (OuterVolumeSpecName: "kube-api-access-2xqkf") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "kube-api-access-2xqkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:39:48.822067 kubelet[2627]: I0903 23:39:48.821999 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.822067 kubelet[2627]: I0903 23:39:48.822012 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-hostproc" (OuterVolumeSpecName: "hostproc") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.822285 kubelet[2627]: I0903 23:39:48.822039 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4dafee3a-2272-43e7-8323-9d1c6bab9769" (UID: "4dafee3a-2272-43e7-8323-9d1c6bab9769"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:39:48.822635 kubelet[2627]: I0903 23:39:48.822605 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f46c986-63a3-46a0-bf40-9988b0adca7e-kube-api-access-n98qf" (OuterVolumeSpecName: "kube-api-access-n98qf") pod "5f46c986-63a3-46a0-bf40-9988b0adca7e" (UID: "5f46c986-63a3-46a0-bf40-9988b0adca7e"). InnerVolumeSpecName "kube-api-access-n98qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:39:48.823740 kubelet[2627]: I0903 23:39:48.823692 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f46c986-63a3-46a0-bf40-9988b0adca7e" (UID: "5f46c986-63a3-46a0-bf40-9988b0adca7e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 3 23:39:48.824255 kubelet[2627]: E0903 23:39:48.824223 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:48.832787 systemd[1]: Removed slice kubepods-burstable-pod4dafee3a_2272_43e7_8323_9d1c6bab9769.slice - libcontainer container kubepods-burstable-pod4dafee3a_2272_43e7_8323_9d1c6bab9769.slice.
Sep 3 23:39:48.832929 systemd[1]: kubepods-burstable-pod4dafee3a_2272_43e7_8323_9d1c6bab9769.slice: Consumed 6.331s CPU time, 124.6M memory peak, 136K read from disk, 12.9M written to disk.
Sep 3 23:39:48.834319 systemd[1]: Removed slice kubepods-besteffort-pod5f46c986_63a3_46a0_bf40_9988b0adca7e.slice - libcontainer container kubepods-besteffort-pod5f46c986_63a3_46a0_bf40_9988b0adca7e.slice.
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914225 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914261 2627 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914272 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914284 2627 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914292 2627 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xqkf\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-kube-api-access-2xqkf\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914300 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914309 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914370 kubelet[2627]: I0903 23:39:48.914320 2627 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n98qf\" (UniqueName: \"kubernetes.io/projected/5f46c986-63a3-46a0-bf40-9988b0adca7e-kube-api-access-n98qf\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914328 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f46c986-63a3-46a0-bf40-9988b0adca7e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914336 2627 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914353 2627 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4dafee3a-2272-43e7-8323-9d1c6bab9769-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914361 2627 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4dafee3a-2272-43e7-8323-9d1c6bab9769-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914383 2627 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914392 2627 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914399 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4dafee3a-2272-43e7-8323-9d1c6bab9769-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:48.914623 kubelet[2627]: I0903 23:39:48.914406 2627 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4dafee3a-2272-43e7-8323-9d1c6bab9769-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:39:49.033624 kubelet[2627]: I0903 23:39:49.033593 2627 scope.go:117] "RemoveContainer" containerID="2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20"
Sep 3 23:39:49.036586 containerd[1534]: time="2025-09-03T23:39:49.036548729Z" level=info msg="RemoveContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\""
Sep 3 23:39:49.085473 containerd[1534]: time="2025-09-03T23:39:49.085420154Z" level=info msg="RemoveContainer for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" returns successfully"
Sep 3 23:39:49.088052 kubelet[2627]: I0903 23:39:49.088018 2627 scope.go:117] "RemoveContainer" containerID="2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20"
Sep 3 23:39:49.088430 containerd[1534]: time="2025-09-03T23:39:49.088381478Z" level=error msg="ContainerStatus for \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\": not found"
Sep 3 23:39:49.092439 kubelet[2627]: E0903 23:39:49.092360 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\": not found" containerID="2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20"
Sep 3 23:39:49.092554 kubelet[2627]: I0903 23:39:49.092437 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20"} err="failed to get container status \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b850e3801ae0f711c6d649eda56bf36352580f333342d29b0f90e56748dda20\": not found"
Sep 3 23:39:49.092554 kubelet[2627]: I0903 23:39:49.092545 2627 scope.go:117] "RemoveContainer" containerID="f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb"
Sep 3 23:39:49.094426 containerd[1534]: time="2025-09-03T23:39:49.094394319Z" level=info msg="RemoveContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\""
Sep 3 23:39:49.098260 containerd[1534]: time="2025-09-03T23:39:49.098227853Z" level=info msg="RemoveContainer for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" returns successfully"
Sep 3 23:39:49.098435 kubelet[2627]: I0903 23:39:49.098406 2627 scope.go:117] "RemoveContainer" containerID="c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8"
Sep 3 23:39:49.099647 containerd[1534]: time="2025-09-03T23:39:49.099626142Z" level=info msg="RemoveContainer for \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\""
Sep 3 23:39:49.106816 containerd[1534]: time="2025-09-03T23:39:49.106773852Z" level=info msg="RemoveContainer for \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" returns successfully"
Sep 3 23:39:49.107046 kubelet[2627]: I0903 23:39:49.107006 2627 scope.go:117] "RemoveContainer" containerID="1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b"
Sep 3 23:39:49.118232 containerd[1534]: time="2025-09-03T23:39:49.118197461Z" level=info msg="RemoveContainer for \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\""
Sep 3 23:39:49.121966 containerd[1534]: time="2025-09-03T23:39:49.121930204Z" level=info msg="RemoveContainer for \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" returns successfully"
Sep 3 23:39:49.122139 kubelet[2627]: I0903 23:39:49.122103 2627 scope.go:117] "RemoveContainer" containerID="3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8"
Sep 3 23:39:49.123502 containerd[1534]: time="2025-09-03T23:39:49.123470241Z" level=info msg="RemoveContainer for \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\""
Sep 3 23:39:49.126188 containerd[1534]: time="2025-09-03T23:39:49.126157267Z" level=info msg="RemoveContainer for \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" returns successfully"
Sep 3 23:39:49.126374 kubelet[2627]: I0903 23:39:49.126352 2627 scope.go:117] "RemoveContainer" containerID="a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af"
Sep 3 23:39:49.127537 containerd[1534]: time="2025-09-03T23:39:49.127504960Z" level=info msg="RemoveContainer for \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\""
Sep 3 23:39:49.129994 containerd[1534]: time="2025-09-03T23:39:49.129968283Z" level=info msg="RemoveContainer for \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" returns successfully"
Sep 3 23:39:49.130120 kubelet[2627]: I0903 23:39:49.130094 2627 scope.go:117] "RemoveContainer" containerID="f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb"
Sep 3 23:39:49.130390 containerd[1534]: time="2025-09-03T23:39:49.130315376Z" level=error msg="ContainerStatus for \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\": not found"
Sep 3 23:39:49.130652 kubelet[2627]: E0903 23:39:49.130509 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\": not found" containerID="f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb"
Sep 3 23:39:49.130652 kubelet[2627]: I0903 23:39:49.130541 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb"} err="failed to get container status \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2f5ebac3b2f53ba3d74f3b28e658f099332919b32d0c827e986e48627bdc7fb\": not found"
Sep 3 23:39:49.130652 kubelet[2627]: I0903 23:39:49.130561 2627 scope.go:117] "RemoveContainer" containerID="c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8"
Sep 3 23:39:49.130760 containerd[1534]: time="2025-09-03T23:39:49.130732582Z" level=error msg="ContainerStatus for \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\": not found"
Sep 3 23:39:49.130871 kubelet[2627]: E0903 23:39:49.130852 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\": not found" containerID="c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8"
Sep 3 23:39:49.130904 kubelet[2627]: I0903 23:39:49.130889 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8"} err="failed to get container status \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\": rpc error: code = NotFound desc = an error occurred when try to find container \"c098daded306f727bf069f244c4d0c2c72f927f41c31a2f1ace042c069aafba8\": not found"
Sep 3 23:39:49.130932 kubelet[2627]: I0903 23:39:49.130907 2627 scope.go:117] "RemoveContainer" containerID="1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b"
Sep 3 23:39:49.131133 containerd[1534]: time="2025-09-03T23:39:49.131070515Z" level=error msg="ContainerStatus for \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\": not found"
Sep 3 23:39:49.131206 kubelet[2627]: E0903 23:39:49.131186 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\": not found" containerID="1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b"
Sep 3 23:39:49.131242 kubelet[2627]: I0903 23:39:49.131212 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b"} err="failed to get container status \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fd55719c72a5b49c3d59976f5cacfe05846bc8fcd67d408955fd08be9dc208b\": not found"
Sep 3 23:39:49.131242 kubelet[2627]: I0903 23:39:49.131230 2627 scope.go:117] "RemoveContainer" containerID="3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8"
Sep 3 23:39:49.131456 containerd[1534]: time="2025-09-03T23:39:49.131425207Z" level=error msg="ContainerStatus for \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\": not found"
Sep 3 23:39:49.131599 kubelet[2627]: E0903 23:39:49.131573 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\": not found" containerID="3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8"
Sep 3 23:39:49.131632 kubelet[2627]: I0903 23:39:49.131608 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8"} err="failed to get container status \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ca7078da37519a4d6b00b673ead6147c44059e64d8815532f61ba293daf08c8\": not found"
Sep 3 23:39:49.131632 kubelet[2627]: I0903 23:39:49.131624 2627 scope.go:117] "RemoveContainer" containerID="a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af"
Sep 3 23:39:49.131791 containerd[1534]: time="2025-09-03T23:39:49.131758341Z" level=error msg="ContainerStatus for \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\": not found"
Sep 3 23:39:49.131937 kubelet[2627]: E0903 23:39:49.131854 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\": not found" containerID="a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af"
Sep 3 23:39:49.131937 kubelet[2627]: I0903 23:39:49.131877 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af"} err="failed to get container status \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\": rpc error: code = NotFound desc = an error occurred when try to find container \"a784495543001405436dc198d91dc17ee25f0ba94765ec911607f7a38956e7af\": not found"
Sep 3 23:39:49.637267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-107f0e3550ac4e53b88b35eead4cd2f00e35950426b95bfb0988715b5c6eb527-shm.mount: Deactivated successfully.
Sep 3 23:39:49.637418 systemd[1]: var-lib-kubelet-pods-4dafee3a\x2d2272\x2d43e7\x2d8323\x2d9d1c6bab9769-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 3 23:39:49.637469 systemd[1]: var-lib-kubelet-pods-5f46c986\x2d63a3\x2d46a0\x2dbf40\x2d9988b0adca7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn98qf.mount: Deactivated successfully.
Sep 3 23:39:49.637520 systemd[1]: var-lib-kubelet-pods-4dafee3a\x2d2272\x2d43e7\x2d8323\x2d9d1c6bab9769-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xqkf.mount: Deactivated successfully.
Sep 3 23:39:49.637570 systemd[1]: var-lib-kubelet-pods-4dafee3a\x2d2272\x2d43e7\x2d8323\x2d9d1c6bab9769-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 3 23:39:50.547050 sshd[4234]: Connection closed by 10.0.0.1 port 36890
Sep 3 23:39:50.547501 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:50.556915 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:36890.service: Deactivated successfully.
Sep 3 23:39:50.559066 systemd[1]: session-23.scope: Deactivated successfully.
Sep 3 23:39:50.559256 systemd[1]: session-23.scope: Consumed 1.151s CPU time, 23.3M memory peak.
Sep 3 23:39:50.560229 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit.
Sep 3 23:39:50.562172 systemd-logind[1510]: Removed session 23.
Sep 3 23:39:50.564155 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:39134.service - OpenSSH per-connection server daemon (10.0.0.1:39134).
Sep 3 23:39:50.614804 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 39134 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:50.615968 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:50.620109 systemd-logind[1510]: New session 24 of user core.
Sep 3 23:39:50.624883 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 3 23:39:50.827282 kubelet[2627]: I0903 23:39:50.826908 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" path="/var/lib/kubelet/pods/4dafee3a-2272-43e7-8323-9d1c6bab9769/volumes"
Sep 3 23:39:50.829322 kubelet[2627]: I0903 23:39:50.829058 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f46c986-63a3-46a0-bf40-9988b0adca7e" path="/var/lib/kubelet/pods/5f46c986-63a3-46a0-bf40-9988b0adca7e/volumes"
Sep 3 23:39:52.051672 sshd[4389]: Connection closed by 10.0.0.1 port 39134
Sep 3 23:39:52.052088 sshd-session[4387]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:52.062680 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:39134.service: Deactivated successfully.
Sep 3 23:39:52.066950 systemd[1]: session-24.scope: Deactivated successfully.
Sep 3 23:39:52.067153 systemd[1]: session-24.scope: Consumed 1.348s CPU time, 26.2M memory peak.
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067613 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f46c986-63a3-46a0-bf40-9988b0adca7e" containerName="cilium-operator"
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067639 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="clean-cilium-state"
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067646 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="cilium-agent"
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067653 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="apply-sysctl-overwrites"
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067658 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="mount-bpf-fs"
Sep 3 23:39:52.067731 kubelet[2627]: E0903 23:39:52.067665 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="mount-cgroup"
Sep 3 23:39:52.067731 kubelet[2627]: I0903 23:39:52.067695 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dafee3a-2272-43e7-8323-9d1c6bab9769" containerName="cilium-agent"
Sep 3 23:39:52.067731 kubelet[2627]: I0903 23:39:52.067708 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f46c986-63a3-46a0-bf40-9988b0adca7e" containerName="cilium-operator"
Sep 3 23:39:52.069971 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit.
Sep 3 23:39:52.075182 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:39148.service - OpenSSH per-connection server daemon (10.0.0.1:39148).
Sep 3 23:39:52.076845 systemd-logind[1510]: Removed session 24.
Sep 3 23:39:52.090534 systemd[1]: Created slice kubepods-burstable-pod3226bf84_a15d_44f2_92da_9a6230f77aa0.slice - libcontainer container kubepods-burstable-pod3226bf84_a15d_44f2_92da_9a6230f77aa0.slice.
Sep 3 23:39:52.130608 kubelet[2627]: I0903 23:39:52.130555 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-cilium-run\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130608 kubelet[2627]: I0903 23:39:52.130601 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-bpf-maps\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130767 kubelet[2627]: I0903 23:39:52.130623 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rv8t\" (UniqueName: \"kubernetes.io/projected/3226bf84-a15d-44f2-92da-9a6230f77aa0-kube-api-access-8rv8t\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130767 kubelet[2627]: I0903 23:39:52.130641 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3226bf84-a15d-44f2-92da-9a6230f77aa0-cilium-config-path\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130767 kubelet[2627]: I0903 23:39:52.130657 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3226bf84-a15d-44f2-92da-9a6230f77aa0-cilium-ipsec-secrets\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130767 kubelet[2627]: I0903 23:39:52.130672 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-host-proc-sys-net\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130767 kubelet[2627]: I0903 23:39:52.130686 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-host-proc-sys-kernel\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130704 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-xtables-lock\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130739 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-etc-cni-netd\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130757 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3226bf84-a15d-44f2-92da-9a6230f77aa0-clustermesh-secrets\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130776 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-cni-path\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130792 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-cilium-cgroup\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130875 kubelet[2627]: I0903 23:39:52.130807 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-hostproc\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130985 kubelet[2627]: I0903 23:39:52.130825 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3226bf84-a15d-44f2-92da-9a6230f77aa0-lib-modules\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.130985 kubelet[2627]: I0903 23:39:52.130839 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3226bf84-a15d-44f2-92da-9a6230f77aa0-hubble-tls\") pod \"cilium-j7jsc\" (UID: \"3226bf84-a15d-44f2-92da-9a6230f77aa0\") " pod="kube-system/cilium-j7jsc"
Sep 3 23:39:52.136066 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 39148 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:52.137261 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:52.141074 systemd-logind[1510]: New session 25 of user core.
Sep 3 23:39:52.150869 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 3 23:39:52.200757 sshd[4403]: Connection closed by 10.0.0.1 port 39148
Sep 3 23:39:52.201168 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
Sep 3 23:39:52.214803 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:39148.service: Deactivated successfully.
Sep 3 23:39:52.216498 systemd[1]: session-25.scope: Deactivated successfully.
Sep 3 23:39:52.217275 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit.
Sep 3 23:39:52.220380 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:39162.service - OpenSSH per-connection server daemon (10.0.0.1:39162).
Sep 3 23:39:52.220933 systemd-logind[1510]: Removed session 25.
Sep 3 23:39:52.272289 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:1HWmUcWn3RdCIc5OCfBmjcEfzTg2RLCcl058HjE/qfU
Sep 3 23:39:52.273693 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:39:52.277791 systemd-logind[1510]: New session 26 of user core.
Sep 3 23:39:52.287930 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 3 23:39:52.404837 kubelet[2627]: E0903 23:39:52.404546 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:52.406325 containerd[1534]: time="2025-09-03T23:39:52.406276879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7jsc,Uid:3226bf84-a15d-44f2-92da-9a6230f77aa0,Namespace:kube-system,Attempt:0,}"
Sep 3 23:39:52.420339 containerd[1534]: time="2025-09-03T23:39:52.420259450Z" level=info msg="connecting to shim 77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:39:52.444890 systemd[1]: Started cri-containerd-77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f.scope - libcontainer container 77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f.
Sep 3 23:39:52.468600 containerd[1534]: time="2025-09-03T23:39:52.468562790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7jsc,Uid:3226bf84-a15d-44f2-92da-9a6230f77aa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\""
Sep 3 23:39:52.469607 kubelet[2627]: E0903 23:39:52.469159 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:52.471892 containerd[1534]: time="2025-09-03T23:39:52.471867015Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 3 23:39:52.477497 containerd[1534]: time="2025-09-03T23:39:52.477460652Z" level=info msg="Container 63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:52.484761 containerd[1534]: time="2025-09-03T23:39:52.484700381Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\""
Sep 3 23:39:52.485235 containerd[1534]: time="2025-09-03T23:39:52.485209628Z" level=info msg="StartContainer for \"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\""
Sep 3 23:39:52.487170 containerd[1534]: time="2025-09-03T23:39:52.486769927Z" level=info msg="connecting to shim 63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" protocol=ttrpc version=3
Sep 3 23:39:52.508935 systemd[1]: Started cri-containerd-63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a.scope - libcontainer container 63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a.
Sep 3 23:39:52.533616 containerd[1534]: time="2025-09-03T23:39:52.533538447Z" level=info msg="StartContainer for \"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\" returns successfully"
Sep 3 23:39:52.544007 systemd[1]: cri-containerd-63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a.scope: Deactivated successfully.
Sep 3 23:39:52.545956 containerd[1534]: time="2025-09-03T23:39:52.545922922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\" id:\"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\" pid:4481 exited_at:{seconds:1756942792 nanos:545563345}"
Sep 3 23:39:52.546195 containerd[1534]: time="2025-09-03T23:39:52.546115269Z" level=info msg="received exit event container_id:\"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\" id:\"63e44ddbd40b72a91bc64578fa92b70f978f4805712db342c654636dba19907a\" pid:4481 exited_at:{seconds:1756942792 nanos:545563345}"
Sep 3 23:39:52.825191 kubelet[2627]: E0903 23:39:52.824550 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:52.825191 kubelet[2627]: E0903 23:39:52.825105 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:52.887057 kubelet[2627]: E0903 23:39:52.886997 2627 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 3 23:39:53.052271 kubelet[2627]: E0903 23:39:53.052100 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:53.054977 containerd[1534]: time="2025-09-03T23:39:53.054941561Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 3 23:39:53.062954 containerd[1534]: time="2025-09-03T23:39:53.062863083Z" level=info msg="Container 715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:53.067860 containerd[1534]: time="2025-09-03T23:39:53.067822303Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\""
Sep 3 23:39:53.068295 containerd[1534]: time="2025-09-03T23:39:53.068271156Z" level=info msg="StartContainer for \"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\""
Sep 3 23:39:53.069267 containerd[1534]: time="2025-09-03T23:39:53.069240778Z" level=info msg="connecting to shim 715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" protocol=ttrpc version=3
Sep 3 23:39:53.094876 systemd[1]: Started cri-containerd-715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0.scope - libcontainer container 715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0.
Sep 3 23:39:53.121074 containerd[1534]: time="2025-09-03T23:39:53.121014891Z" level=info msg="StartContainer for \"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\" returns successfully"
Sep 3 23:39:53.126048 systemd[1]: cri-containerd-715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0.scope: Deactivated successfully.
Sep 3 23:39:53.128395 containerd[1534]: time="2025-09-03T23:39:53.128283931Z" level=info msg="received exit event container_id:\"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\" id:\"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\" pid:4525 exited_at:{seconds:1756942793 nanos:128081624}"
Sep 3 23:39:53.128395 containerd[1534]: time="2025-09-03T23:39:53.128346528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\" id:\"715ca17e6516f89514669894a39998c54cfebbacaa28266e1a17386f011e34e0\" pid:4525 exited_at:{seconds:1756942793 nanos:128081624}"
Sep 3 23:39:54.055993 kubelet[2627]: E0903 23:39:54.055942 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:54.061359 containerd[1534]: time="2025-09-03T23:39:54.061324342Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:39:54.074953 containerd[1534]: time="2025-09-03T23:39:54.074903462Z" level=info msg="Container 4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:54.082571 containerd[1534]: time="2025-09-03T23:39:54.082525916Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\""
Sep 3 23:39:54.083770 containerd[1534]: time="2025-09-03T23:39:54.083062726Z" level=info msg="StartContainer for \"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\""
Sep 3 23:39:54.085110 containerd[1534]: time="2025-09-03T23:39:54.085076653Z" level=info msg="connecting to shim 4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" protocol=ttrpc version=3
Sep 3 23:39:54.105903 systemd[1]: Started cri-containerd-4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd.scope - libcontainer container 4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd.
Sep 3 23:39:54.140768 containerd[1534]: time="2025-09-03T23:39:54.140689262Z" level=info msg="StartContainer for \"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\" returns successfully"
Sep 3 23:39:54.140885 systemd[1]: cri-containerd-4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd.scope: Deactivated successfully.
Sep 3 23:39:54.144305 containerd[1534]: time="2025-09-03T23:39:54.144229664Z" level=info msg="received exit event container_id:\"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\" id:\"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\" pid:4569 exited_at:{seconds:1756942794 nanos:143964639}"
Sep 3 23:39:54.144494 containerd[1534]: time="2025-09-03T23:39:54.144287181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\" id:\"4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd\" pid:4569 exited_at:{seconds:1756942794 nanos:143964639}"
Sep 3 23:39:54.165515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f5cb80e8afcc4e9248e896ccaa5f6613432fd86fdd747f498587cb9eef5c8dd-rootfs.mount: Deactivated successfully.
Sep 3 23:39:54.213078 kubelet[2627]: I0903 23:39:54.212536 2627 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-03T23:39:54Z","lastTransitionTime":"2025-09-03T23:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 3 23:39:55.060247 kubelet[2627]: E0903 23:39:55.060171 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:55.064048 containerd[1534]: time="2025-09-03T23:39:55.064010635Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:39:55.075768 containerd[1534]: time="2025-09-03T23:39:55.073760211Z" level=info msg="Container 803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:55.080354 containerd[1534]: time="2025-09-03T23:39:55.080315033Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\""
Sep 3 23:39:55.081562 containerd[1534]: time="2025-09-03T23:39:55.081516571Z" level=info msg="StartContainer for \"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\""
Sep 3 23:39:55.083105 containerd[1534]: time="2025-09-03T23:39:55.083078090Z" level=info msg="connecting to shim 803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" protocol=ttrpc version=3
Sep 3 23:39:55.102875 systemd[1]: Started cri-containerd-803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45.scope - libcontainer container 803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45.
Sep 3 23:39:55.125994 systemd[1]: cri-containerd-803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45.scope: Deactivated successfully.
Sep 3 23:39:55.127122 containerd[1534]: time="2025-09-03T23:39:55.127088978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\" id:\"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\" pid:4609 exited_at:{seconds:1756942795 nanos:126634401}"
Sep 3 23:39:55.127410 containerd[1534]: time="2025-09-03T23:39:55.127300727Z" level=info msg="received exit event container_id:\"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\" id:\"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\" pid:4609 exited_at:{seconds:1756942795 nanos:126634401}"
Sep 3 23:39:55.133826 containerd[1534]: time="2025-09-03T23:39:55.133800151Z" level=info msg="StartContainer for \"803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45\" returns successfully"
Sep 3 23:39:55.144979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-803aef8a9e992fb0380b62dea628d937db7e2f083a14e5ab5e5d3e78c53a3f45-rootfs.mount: Deactivated successfully.
Sep 3 23:39:56.065537 kubelet[2627]: E0903 23:39:56.065491 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:56.068912 containerd[1534]: time="2025-09-03T23:39:56.068710137Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:39:56.088452 containerd[1534]: time="2025-09-03T23:39:56.088415802Z" level=info msg="Container 60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:39:56.092140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300113106.mount: Deactivated successfully.
Sep 3 23:39:56.095686 containerd[1534]: time="2025-09-03T23:39:56.095644139Z" level=info msg="CreateContainer within sandbox \"77f54d49d830151309656cb9b7cfd2a10ac797a1793c78516fdcb8ee304a529f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\""
Sep 3 23:39:56.096387 containerd[1534]: time="2025-09-03T23:39:56.096343626Z" level=info msg="StartContainer for \"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\""
Sep 3 23:39:56.097558 containerd[1534]: time="2025-09-03T23:39:56.097494451Z" level=info msg="connecting to shim 60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d" address="unix:///run/containerd/s/9ee0304cb5bfb140369eecfc6dd916975063a747f456aececc5e9b71df93cbef" protocol=ttrpc version=3
Sep 3 23:39:56.116879 systemd[1]: Started cri-containerd-60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d.scope - libcontainer container 60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d.
Sep 3 23:39:56.147743 containerd[1534]: time="2025-09-03T23:39:56.147684750Z" level=info msg="StartContainer for \"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" returns successfully"
Sep 3 23:39:56.196619 containerd[1534]: time="2025-09-03T23:39:56.196526712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" id:\"110b41320140d4ddc68e2c34fbe51d7207c0e84efa05060d8e3e26f5c6c38de9\" pid:4679 exited_at:{seconds:1756942796 nanos:196279644}"
Sep 3 23:39:56.400746 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 3 23:39:57.071545 kubelet[2627]: E0903 23:39:57.071511 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:57.088133 kubelet[2627]: I0903 23:39:57.088027 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7jsc" podStartSLOduration=5.088013053 podStartE2EDuration="5.088013053s" podCreationTimestamp="2025-09-03 23:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:39:57.087327083 +0000 UTC m=+84.352147555" watchObservedRunningTime="2025-09-03 23:39:57.088013053 +0000 UTC m=+84.352833445"
Sep 3 23:39:58.405862 kubelet[2627]: E0903 23:39:58.405554 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:39:58.705806 containerd[1534]: time="2025-09-03T23:39:58.705669531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" id:\"ef4b4fe494f404cfda1a133679c54863ab6df9534299ad69d70cf54becb12e93\" pid:5074 exit_status:1 exited_at:{seconds:1756942798 nanos:705040556}"
Sep 3 23:39:59.212003 systemd-networkd[1434]: lxc_health: Link UP
Sep 3 23:39:59.212206 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 3 23:40:00.406127 kubelet[2627]: E0903 23:40:00.406074 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:40:00.851089 containerd[1534]: time="2025-09-03T23:40:00.850993001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" id:\"307a1f466cae9eed5d9270ec402b76f15bb76b1d9ae33ceb2d79237a2ab6fd8d\" pid:5219 exited_at:{seconds:1756942800 nanos:850619013}"
Sep 3 23:40:01.082068 kubelet[2627]: E0903 23:40:01.082032 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:40:01.087904 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 3 23:40:02.084121 kubelet[2627]: E0903 23:40:02.084092 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:40:02.955431 containerd[1534]: time="2025-09-03T23:40:02.955393539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" id:\"a55cb8539e5705c765885b1205ca8f079c479090b0a42b6fa05538ffd353ff9a\" pid:5252 exited_at:{seconds:1756942802 nanos:954793514}"
Sep 3 23:40:05.058171 containerd[1534]: time="2025-09-03T23:40:05.058116615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60c10808180b6bc2348c156a4b66b880e600c4c17ca9c4477bdbfc4257975d2d\" id:\"9a765f256bcac7834d59c125c00087693b966c600cfd583854f355fd9b3fb636\" pid:5277 exited_at:{seconds:1756942805 nanos:57350827}"
Sep 3 23:40:05.066011 sshd[4416]: Connection closed by 10.0.0.1 port 39162
Sep 3 23:40:05.066708 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Sep 3 23:40:05.070104 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:39162.service: Deactivated successfully.
Sep 3 23:40:05.071625 systemd[1]: session-26.scope: Deactivated successfully.
Sep 3 23:40:05.073443 systemd-logind[1510]: Session 26 logged out. Waiting for processes to exit.
Sep 3 23:40:05.074966 systemd-logind[1510]: Removed session 26.