Sep 13 00:06:19.866341 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:06:19.866362 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:06:19.866372 kernel: KASLR enabled
Sep 13 00:06:19.866379 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:06:19.866384 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 13 00:06:19.866390 kernel: random: crng init done
Sep 13 00:06:19.866402 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:19.866408 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 13 00:06:19.866414 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:06:19.866422 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866428 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866434 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866440 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866446 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866454 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866462 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866468 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866475 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:19.866481 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 00:06:19.866488 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:06:19.866495 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:19.866501 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 13 00:06:19.866508 kernel: Zone ranges:
Sep 13 00:06:19.866514 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:19.866520 kernel: DMA32 empty
Sep 13 00:06:19.866528 kernel: Normal empty
Sep 13 00:06:19.866534 kernel: Movable zone start for each node
Sep 13 00:06:19.866540 kernel: Early memory node ranges
Sep 13 00:06:19.866547 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 13 00:06:19.866553 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 13 00:06:19.866559 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 13 00:06:19.866566 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 13 00:06:19.866572 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 13 00:06:19.866578 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 13 00:06:19.866585 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 13 00:06:19.866591 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:19.866598 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 00:06:19.866605 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:06:19.866612 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:06:19.866618 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:06:19.866627 kernel: psci: Trusted OS migration not required
Sep 13 00:06:19.866633 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:06:19.866641 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:06:19.866649 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:06:19.866656 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:06:19.866663 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 00:06:19.866669 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:06:19.866676 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:06:19.866778 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:06:19.866786 kernel: CPU features: detected: Spectre-v4
Sep 13 00:06:19.866793 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:06:19.866800 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:06:19.866807 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:06:19.866817 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:06:19.866824 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:06:19.866830 kernel: alternatives: applying boot alternatives
Sep 13 00:06:19.866838 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:06:19.866845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:19.866852 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:06:19.866974 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:19.866987 kernel: Fallback order for Node 0: 0
Sep 13 00:06:19.866994 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 13 00:06:19.867000 kernel: Policy zone: DMA
Sep 13 00:06:19.867007 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:19.867033 kernel: software IO TLB: area num 4.
Sep 13 00:06:19.867040 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 13 00:06:19.867048 kernel: Memory: 2386340K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185948K reserved, 0K cma-reserved)
Sep 13 00:06:19.867055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:06:19.867062 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:19.867069 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:19.867077 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:06:19.867092 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:19.867101 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:19.867108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:19.867115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:06:19.867124 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:06:19.867131 kernel: GICv3: 256 SPIs implemented
Sep 13 00:06:19.867138 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:06:19.867144 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:06:19.867157 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:06:19.867164 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:06:19.867170 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:06:19.867177 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:06:19.867184 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:06:19.867191 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 13 00:06:19.867198 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 13 00:06:19.867205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:06:19.867213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:19.867220 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:06:19.867227 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:06:19.867233 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:06:19.867240 kernel: arm-pv: using stolen time PV
Sep 13 00:06:19.867248 kernel: Console: colour dummy device 80x25
Sep 13 00:06:19.867254 kernel: ACPI: Core revision 20230628
Sep 13 00:06:19.867262 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:06:19.867268 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:19.867276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:06:19.867287 kernel: landlock: Up and running.
Sep 13 00:06:19.867295 kernel: SELinux: Initializing.
Sep 13 00:06:19.867306 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:06:19.867316 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:06:19.867326 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:06:19.867336 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:06:19.867344 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:19.867351 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:06:19.867358 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:06:19.867367 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:06:19.867374 kernel: Remapping and enabling EFI services.
Sep 13 00:06:19.867381 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:19.867389 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:06:19.867396 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:06:19.867403 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 13 00:06:19.867410 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:19.867417 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:06:19.867424 kernel: Detected PIPT I-cache on CPU2
Sep 13 00:06:19.867431 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 00:06:19.867440 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 13 00:06:19.867447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:19.867459 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 00:06:19.867467 kernel: Detected PIPT I-cache on CPU3
Sep 13 00:06:19.867475 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 00:06:19.867482 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 13 00:06:19.867489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:19.867496 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 00:06:19.867504 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:06:19.867513 kernel: SMP: Total of 4 processors activated.
Sep 13 00:06:19.867520 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:06:19.867528 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:06:19.867535 kernel: CPU features: detected: Common not Private translations
Sep 13 00:06:19.867542 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:06:19.867550 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 00:06:19.867557 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:06:19.867564 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:06:19.867573 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:06:19.867580 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:06:19.867587 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:06:19.867594 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:06:19.867602 kernel: alternatives: applying system-wide alternatives
Sep 13 00:06:19.867609 kernel: devtmpfs: initialized
Sep 13 00:06:19.867617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:19.867624 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:06:19.867631 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:19.867641 kernel: SMBIOS 3.0.0 present.
Sep 13 00:06:19.867648 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 13 00:06:19.867656 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:19.867663 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:06:19.867671 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:06:19.867679 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:06:19.867686 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:19.867694 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:19.867701 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:19.867710 kernel: cpuidle: using governor menu
Sep 13 00:06:19.867717 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:06:19.867725 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:06:19.867732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:19.867739 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:06:19.867747 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:06:19.867754 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:06:19.867761 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:06:19.867769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:06:19.867778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:06:19.867785 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:06:19.867793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:06:19.867800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:19.867807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:06:19.867815 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:06:19.867822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:06:19.867830 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:19.867837 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:19.867846 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:19.867853 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:19.867861 kernel: ACPI: Interpreter enabled
Sep 13 00:06:19.867869 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:06:19.867876 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:06:19.867884 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:06:19.867891 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:06:19.867898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:19.868054 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:19.868184 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:06:19.868257 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:06:19.868341 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:06:19.868411 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:06:19.868421 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:06:19.868428 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:19.868497 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:06:19.868560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:06:19.868618 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:06:19.868679 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:19.868774 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:06:19.868885 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:06:19.868984 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 13 00:06:19.869080 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 13 00:06:19.869189 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:06:19.869258 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:06:19.869327 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 13 00:06:19.869399 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 13 00:06:19.869459 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:06:19.869517 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:06:19.869579 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:06:19.869589 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:06:19.869597 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:06:19.869605 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:06:19.869613 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:06:19.869620 kernel: iommu: Default domain type: Translated
Sep 13 00:06:19.869628 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:06:19.869636 kernel: efivars: Registered efivars operations
Sep 13 00:06:19.869643 kernel: vgaarb: loaded
Sep 13 00:06:19.869652 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:06:19.869659 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:19.869672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:19.869680 kernel: pnp: PnP ACPI init
Sep 13 00:06:19.869776 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:06:19.869789 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:06:19.869796 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:19.869804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:06:19.869815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:06:19.869823 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:19.869830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:19.869838 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:06:19.869845 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:06:19.869853 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:06:19.869860 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:06:19.869868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:19.869875 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:19.869895 kernel: kvm [1]: HYP mode not available
Sep 13 00:06:19.869903 kernel: Initialise system trusted keyrings
Sep 13 00:06:19.869910 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:06:19.869917 kernel: Key type asymmetric registered
Sep 13 00:06:19.869924 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:19.869932 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:06:19.869939 kernel: io scheduler mq-deadline registered
Sep 13 00:06:19.869947 kernel: io scheduler kyber registered
Sep 13 00:06:19.869955 kernel: io scheduler bfq registered
Sep 13 00:06:19.869964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:06:19.869971 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:06:19.869979 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:06:19.870107 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 00:06:19.870119 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:19.870127 kernel: thunder_xcv, ver 1.0
Sep 13 00:06:19.870134 kernel: thunder_bgx, ver 1.0
Sep 13 00:06:19.870141 kernel: nicpf, ver 1.0
Sep 13 00:06:19.870152 kernel: nicvf, ver 1.0
Sep 13 00:06:19.870232 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:06:19.870301 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:06:19 UTC (1757721979)
Sep 13 00:06:19.870311 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:06:19.870319 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:06:19.870326 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 13 00:06:19.870334 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 00:06:19.870341 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:19.870349 kernel: Segment Routing with IPv6
Sep 13 00:06:19.870358 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:19.870366 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:19.870374 kernel: Key type dns_resolver registered
Sep 13 00:06:19.870381 kernel: registered taskstats version 1
Sep 13 00:06:19.870388 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:19.870396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 13 00:06:19.870403 kernel: Key type .fscrypt registered
Sep 13 00:06:19.870411 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:19.870418 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:19.870427 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:19.870435 kernel: ima: No architecture policies found
Sep 13 00:06:19.870442 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:06:19.870449 kernel: clk: Disabling unused clocks
Sep 13 00:06:19.870456 kernel: Freeing unused kernel memory: 39488K
Sep 13 00:06:19.870463 kernel: Run /init as init process
Sep 13 00:06:19.870471 kernel: with arguments:
Sep 13 00:06:19.870478 kernel: /init
Sep 13 00:06:19.870485 kernel: with environment:
Sep 13 00:06:19.870494 kernel: HOME=/
Sep 13 00:06:19.870501 kernel: TERM=linux
Sep 13 00:06:19.870508 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:19.870518 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:19.870528 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:19.870537 systemd[1]: Detected architecture arm64.
Sep 13 00:06:19.870544 systemd[1]: Running in initrd.
Sep 13 00:06:19.870552 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:19.870561 systemd[1]: Hostname set to .
Sep 13 00:06:19.870569 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:19.870577 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:19.870585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:19.870593 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:19.870602 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:06:19.870610 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:19.870618 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:06:19.870628 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:06:19.870637 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:06:19.870646 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:06:19.870654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:19.870662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:19.870670 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:19.870678 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:19.870687 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:19.870695 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:19.870703 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:19.870711 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:19.870719 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:06:19.870727 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:06:19.870735 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:19.870743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:19.870752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:19.870760 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:19.870768 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:06:19.870776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:19.870784 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:06:19.870792 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:19.870801 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:19.870809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:19.870817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:19.870826 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:19.870834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:19.870859 systemd-journald[237]: Collecting audit messages is disabled.
Sep 13 00:06:19.870878 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:06:19.870888 systemd-journald[237]: Journal started
Sep 13 00:06:19.870907 systemd-journald[237]: Runtime Journal (/run/log/journal/ddfb135c37b340079cf122c9f6979b20) is 5.9M, max 47.3M, 41.4M free.
Sep 13 00:06:19.866290 systemd-modules-load[238]: Inserted module 'overlay'
Sep 13 00:06:19.874774 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:19.874808 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:19.879171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:19.878538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:19.879992 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:19.885173 kernel: Bridge firewalling registered
Sep 13 00:06:19.883819 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 13 00:06:19.886378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:19.899217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:19.900838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:19.902535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:19.904862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:19.912526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:19.918427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:19.919918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:19.925489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:19.943453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:06:19.945719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:19.953134 dracut-cmdline[277]: dracut-dracut-053
Sep 13 00:06:19.955915 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:06:19.972074 systemd-resolved[281]: Positive Trust Anchors:
Sep 13 00:06:19.972100 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:19.972131 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:19.976908 systemd-resolved[281]: Defaulting to hostname 'linux'.
Sep 13 00:06:19.980100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:19.981045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:20.020044 kernel: SCSI subsystem initialized
Sep 13 00:06:20.025031 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:20.032038 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:20.046052 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:20.046110 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:20.087827 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:20.103262 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:06:20.119624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:20.119692 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:06:20.119713 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:06:20.166058 kernel: raid6: neonx8 gen() 13992 MB/s Sep 13 00:06:20.183062 kernel: raid6: neonx4 gen() 13548 MB/s Sep 13 00:06:20.200046 kernel: raid6: neonx2 gen() 12336 MB/s Sep 13 00:06:20.217040 kernel: raid6: neonx1 gen() 10368 MB/s Sep 13 00:06:20.234036 kernel: raid6: int64x8 gen() 6956 MB/s Sep 13 00:06:20.251034 kernel: raid6: int64x4 gen() 7341 MB/s Sep 13 00:06:20.268050 kernel: raid6: int64x2 gen() 6114 MB/s Sep 13 00:06:20.285039 kernel: raid6: int64x1 gen() 5021 MB/s Sep 13 00:06:20.285059 kernel: raid6: using algorithm neonx8 gen() 13992 MB/s Sep 13 00:06:20.302055 kernel: raid6: .... xor() 12013 MB/s, rmw enabled Sep 13 00:06:20.302105 kernel: raid6: using neon recovery algorithm Sep 13 00:06:20.308256 kernel: xor: measuring software checksum speed Sep 13 00:06:20.308305 kernel: 8regs : 18022 MB/sec Sep 13 00:06:20.309141 kernel: 32regs : 19664 MB/sec Sep 13 00:06:20.309214 kernel: arm64_neon : 25253 MB/sec Sep 13 00:06:20.310037 kernel: xor: using function: arm64_neon (25253 MB/sec) Sep 13 00:06:20.360066 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:06:20.376984 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:06:20.392334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:20.404248 systemd-udevd[464]: Using default interface naming scheme 'v255'. Sep 13 00:06:20.408544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:20.424492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:06:20.444356 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 13 00:06:20.478310 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:06:20.492195 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:20.532621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:20.545686 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:06:20.559988 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:20.561373 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:20.565866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:20.566914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:20.576490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:06:20.590120 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 13 00:06:20.590309 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:06:20.589999 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:20.599398 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:20.599447 kernel: GPT:9289727 != 19775487
Sep 13 00:06:20.599458 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:20.599467 kernel: GPT:9289727 != 19775487
Sep 13 00:06:20.600573 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:20.601254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:20.609557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:20.609688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:20.613499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:20.614727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:20.615214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:20.618420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:20.633400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:20.639653 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (513)
Sep 13 00:06:20.641032 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (526)
Sep 13 00:06:20.647567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:06:20.653341 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:06:20.655605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:20.663940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:06:20.667904 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:06:20.669008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:06:20.682188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:06:20.684277 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:20.688346 disk-uuid[555]: Primary Header is updated.
Sep 13 00:06:20.688346 disk-uuid[555]: Secondary Entries is updated.
Sep 13 00:06:20.688346 disk-uuid[555]: Secondary Header is updated.
Sep 13 00:06:20.692035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:20.695063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:20.700048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:20.705875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:21.703053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:21.703658 disk-uuid[556]: The operation has completed successfully.
Sep 13 00:06:21.758905 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:21.759040 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:06:21.775184 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:06:21.778099 sh[579]: Success
Sep 13 00:06:21.798054 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:06:21.836689 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:06:21.854769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:06:21.856905 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:06:21.872943 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:06:21.872989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:21.873000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:06:21.873011 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:06:21.873599 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:06:21.879768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:06:21.881499 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:06:21.891231 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:06:21.893191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:06:21.909071 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:06:21.909129 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:21.909140 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:21.913037 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:21.921691 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:06:21.923242 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:06:21.930539 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:06:21.937276 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:06:22.012083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:22.014818 ignition[681]: Ignition 2.19.0
Sep 13 00:06:22.014829 ignition[681]: Stage: fetch-offline
Sep 13 00:06:22.023217 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:06:22.014867 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:22.014876 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:22.015070 ignition[681]: parsed url from cmdline: ""
Sep 13 00:06:22.015074 ignition[681]: no config URL provided
Sep 13 00:06:22.015079 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:22.015087 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:22.015111 ignition[681]: op(1): [started] loading QEMU firmware config module
Sep 13 00:06:22.015115 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:06:22.022218 ignition[681]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:06:22.049225 systemd-networkd[770]: lo: Link UP
Sep 13 00:06:22.049236 systemd-networkd[770]: lo: Gained carrier
Sep 13 00:06:22.049927 systemd-networkd[770]: Enumeration completed
Sep 13 00:06:22.050031 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:06:22.051132 systemd[1]: Reached target network.target - Network.
Sep 13 00:06:22.052096 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:22.052099 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:22.057224 systemd-networkd[770]: eth0: Link UP
Sep 13 00:06:22.057228 systemd-networkd[770]: eth0: Gained carrier
Sep 13 00:06:22.057236 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:22.084795 ignition[681]: parsing config with SHA512: cffa5b3711563d6b2c929bf83bae46785c2bccf9bec7d326114efa43434ee82beb18797f036d245470df018aaae6b79c086a962cb83f44b4c0abd09c387d0b39
Sep 13 00:06:22.090094 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:06:22.091567 unknown[681]: fetched base config from "system"
Sep 13 00:06:22.091583 unknown[681]: fetched user config from "qemu"
Sep 13 00:06:22.092216 ignition[681]: fetch-offline: fetch-offline passed
Sep 13 00:06:22.094392 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:22.092298 ignition[681]: Ignition finished successfully
Sep 13 00:06:22.095708 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:06:22.105283 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:06:22.116159 ignition[776]: Ignition 2.19.0
Sep 13 00:06:22.116169 ignition[776]: Stage: kargs
Sep 13 00:06:22.116340 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:22.116352 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:22.119192 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:22.117299 ignition[776]: kargs: kargs passed
Sep 13 00:06:22.117350 ignition[776]: Ignition finished successfully
Sep 13 00:06:22.133284 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:06:22.142971 ignition[785]: Ignition 2.19.0
Sep 13 00:06:22.142982 ignition[785]: Stage: disks
Sep 13 00:06:22.143181 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:22.143191 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:22.144073 ignition[785]: disks: disks passed
Sep 13 00:06:22.144121 ignition[785]: Ignition finished successfully
Sep 13 00:06:22.146479 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:06:22.151100 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:22.151992 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:06:22.153983 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:06:22.155830 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:22.157538 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:22.170225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:06:22.181635 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:06:22.189084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:06:22.203202 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:06:22.246063 kernel: EXT4-fs (vda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:06:22.247222 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:06:22.248795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:06:22.265134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:22.267334 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:06:22.268120 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:06:22.268163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:06:22.268185 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:22.273820 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:06:22.275608 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:06:22.289036 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (803)
Sep 13 00:06:22.292437 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:06:22.292474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:22.292486 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:22.296044 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:22.298927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:22.327526 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:06:22.331604 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:06:22.336742 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:06:22.340137 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:06:22.421939 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:06:22.429156 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:06:22.432969 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:22.438080 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:06:22.457115 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:06:22.469348 ignition[917]: INFO : Ignition 2.19.0
Sep 13 00:06:22.471043 ignition[917]: INFO : Stage: mount
Sep 13 00:06:22.471043 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:22.471043 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:22.474195 ignition[917]: INFO : mount: mount passed
Sep 13 00:06:22.474195 ignition[917]: INFO : Ignition finished successfully
Sep 13 00:06:22.473584 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:06:22.483411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:06:22.871375 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:06:22.882239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:22.888245 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (929)
Sep 13 00:06:22.888291 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:06:22.889736 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:22.889779 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:22.893060 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:22.894563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:22.922072 ignition[947]: INFO : Ignition 2.19.0
Sep 13 00:06:22.922072 ignition[947]: INFO : Stage: files
Sep 13 00:06:22.923804 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:22.923804 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:22.923804 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:06:22.928030 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:06:22.928030 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:06:22.928030 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:06:22.928030 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:06:22.928030 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:06:22.927945 unknown[947]: wrote ssh authorized keys file for user: core
Sep 13 00:06:22.935751 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:06:22.935751 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:06:23.031295 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:06:23.410843 systemd-networkd[770]: eth0: Gained IPv6LL
Sep 13 00:06:23.426254 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:06:23.426254 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:06:23.429214 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:06:23.639218 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:06:23.722484 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:06:23.722484 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:06:23.726605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:06:23.937298 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:06:24.214810 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:06:24.214810 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:06:24.217997 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:06:24.236653 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:06:24.241851 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:24.243192 ignition[947]: INFO : files: files passed
Sep 13 00:06:24.243192 ignition[947]: INFO : Ignition finished successfully
Sep 13 00:06:24.243961 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:06:24.256194 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:06:24.259211 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:06:24.262054 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:06:24.262885 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:06:24.267367 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:06:24.271240 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:24.271240 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:24.273879 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:24.275152 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:06:24.277607 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:06:24.288243 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:06:24.311939 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:06:24.312263 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:06:24.314438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:06:24.316220 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:06:24.317915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:06:24.318740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:06:24.336147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:06:24.349423 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:06:24.359480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:24.360720 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:24.362296 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:06:24.363709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:06:24.363832 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:06:24.365772 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:06:24.367377 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:06:24.368827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:06:24.370740 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:24.372662 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:24.375333 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:06:24.376875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:24.378490 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:06:24.380301 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:06:24.381604 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:06:24.383103 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:06:24.383233 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:24.384978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:24.386532 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:24.387958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:06:24.389560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:24.390547 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:06:24.390663 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:24.392768 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:06:24.392881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:24.394352 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:06:24.395557 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:06:24.397848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:24.398905 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:06:24.400773 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:06:24.401925 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:06:24.402006 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:24.403218 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:06:24.403297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:24.404449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:06:24.404549 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:06:24.405933 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:06:24.406055 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:06:24.417225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:06:24.418608 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:24.419338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:06:24.419451 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:24.421199 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:06:24.421309 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:24.426466 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:06:24.427374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:06:24.429486 ignition[1001]: INFO : Ignition 2.19.0
Sep 13 00:06:24.430349 ignition[1001]: INFO : Stage: umount
Sep 13 00:06:24.430349 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:24.430349 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:24.432704 ignition[1001]: INFO : umount: umount passed
Sep 13 00:06:24.432704 ignition[1001]: INFO : Ignition finished successfully
Sep 13 00:06:24.431871 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:06:24.432068 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:06:24.435554 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:06:24.435910 systemd[1]: Stopped target network.target - Network.
Sep 13 00:06:24.437780 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:06:24.437829 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:06:24.439058 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:06:24.439100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:24.440316 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:06:24.440360 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:06:24.442095 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:06:24.442145 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:24.443822 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:06:24.446139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:24.455062 systemd-networkd[770]: eth0: DHCPv6 lease lost
Sep 13 00:06:24.455937 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:06:24.456065 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:24.458386 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:06:24.458509 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:06:24.460505 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:06:24.460568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:24.471243 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:06:24.471912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:06:24.471969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:24.473642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:06:24.473685 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:24.474926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:06:24.474962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:24.476815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:06:24.476855 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:24.478326 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:24.487627 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:06:24.487737 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:06:24.496641 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:06:24.496774 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:24.498554 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:06:24.498592 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:24.499994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:06:24.500038 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:24.501467 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:06:24.501511 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:06:24.503672 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:06:24.503715 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:06:24.505807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:06:24.505849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:06:24.512165 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:06:24.512912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:06:24.512959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:24.514710 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:06:24.514746 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:06:24.516236 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:06:24.516271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:24.517936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:24.517969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:24.519799 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:06:24.519879 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 13 00:06:24.521905 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:06:24.521987 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:06:24.523893 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:06:24.524765 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:06:24.524827 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:06:24.526967 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:06:24.536279 systemd[1]: Switching root. Sep 13 00:06:24.563818 systemd-journald[237]: Journal stopped Sep 13 00:06:25.289429 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 13 00:06:25.289484 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:06:25.289500 kernel: SELinux: policy capability open_perms=1 Sep 13 00:06:25.289510 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:06:25.289519 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:06:25.289529 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:06:25.289539 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:06:25.289548 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:06:25.289558 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:06:25.289569 systemd[1]: Successfully loaded SELinux policy in 30.361ms. Sep 13 00:06:25.289593 kernel: audit: type=1403 audit(1757721984.724:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:06:25.289608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.339ms. 
Sep 13 00:06:25.289620 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:06:25.289635 systemd[1]: Detected virtualization kvm. Sep 13 00:06:25.289646 systemd[1]: Detected architecture arm64. Sep 13 00:06:25.289658 systemd[1]: Detected first boot. Sep 13 00:06:25.289668 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:06:25.289679 zram_generator::config[1047]: No configuration found. Sep 13 00:06:25.289691 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:06:25.289703 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:06:25.289714 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:06:25.289725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:25.289736 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:06:25.289746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:06:25.289757 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:06:25.289767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:06:25.289778 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:06:25.289788 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:06:25.289801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:06:25.289811 systemd[1]: Created slice user.slice - User and Session Slice. 
Sep 13 00:06:25.289822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:06:25.289833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:25.289843 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:06:25.289854 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:06:25.289864 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:06:25.289875 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:06:25.289887 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:06:25.289901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:25.289912 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:06:25.289923 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:06:25.289934 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:06:25.289944 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:06:25.289955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:25.289965 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:06:25.289977 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:06:25.289988 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:06:25.289998 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:06:25.290010 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:06:25.290775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 13 00:06:25.290798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:25.290809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:25.290819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:06:25.290830 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:06:25.290841 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:06:25.290857 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:06:25.290868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:06:25.290882 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:06:25.290893 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:06:25.290905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:06:25.290916 systemd[1]: Reached target machines.target - Containers. Sep 13 00:06:25.290927 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:06:25.290938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:25.290952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:06:25.290963 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:06:25.290974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:25.290984 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:25.290995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 13 00:06:25.291006 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:06:25.291048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:25.291064 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:06:25.291078 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:06:25.291089 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:06:25.291100 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:06:25.291111 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:06:25.291122 kernel: fuse: init (API version 7.39) Sep 13 00:06:25.291132 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:06:25.291143 kernel: loop: module loaded Sep 13 00:06:25.291154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:06:25.291165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:06:25.291178 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:06:25.291189 kernel: ACPI: bus type drm_connector registered Sep 13 00:06:25.291199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:06:25.291210 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:06:25.291220 systemd[1]: Stopped verity-setup.service. Sep 13 00:06:25.291231 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:06:25.291242 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:06:25.291252 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:06:25.291263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Sep 13 00:06:25.291300 systemd-journald[1111]: Collecting audit messages is disabled. Sep 13 00:06:25.291322 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:06:25.291334 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:06:25.291345 systemd-journald[1111]: Journal started Sep 13 00:06:25.291369 systemd-journald[1111]: Runtime Journal (/run/log/journal/ddfb135c37b340079cf122c9f6979b20) is 5.9M, max 47.3M, 41.4M free. Sep 13 00:06:25.107983 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:06:25.126874 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:06:25.127264 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:06:25.293035 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:06:25.293741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:25.296482 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:06:25.296635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:06:25.297825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:25.297958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:25.299190 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:25.300176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:25.301217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:25.302131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:25.303595 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:06:25.303715 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:06:25.305066 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:06:25.305200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:25.306328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:25.307483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:06:25.308681 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:06:25.309978 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:06:25.321918 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:06:25.335136 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:06:25.336980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:06:25.337952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:06:25.337990 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:06:25.339915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:06:25.341984 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:06:25.343836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:06:25.344779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:25.346105 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:06:25.347967 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:06:25.349052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 00:06:25.350190 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:06:25.350988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:25.355188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:25.357180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:06:25.359503 systemd-journald[1111]: Time spent on flushing to /var/log/journal/ddfb135c37b340079cf122c9f6979b20 is 30.038ms for 860 entries. Sep 13 00:06:25.359503 systemd-journald[1111]: System Journal (/var/log/journal/ddfb135c37b340079cf122c9f6979b20) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:06:25.403679 systemd-journald[1111]: Received client request to flush runtime journal. Sep 13 00:06:25.403732 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 00:06:25.403751 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:06:25.363232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:06:25.366553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:25.367752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:06:25.370228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:06:25.371437 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:06:25.380350 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:06:25.386078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:25.387540 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:06:25.388932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Sep 13 00:06:25.393541 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:06:25.398544 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:06:25.405399 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:06:25.415246 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Sep 13 00:06:25.415265 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Sep 13 00:06:25.417349 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:06:25.417978 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:06:25.420485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:06:25.430250 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:06:25.439263 kernel: loop1: detected capacity change from 0 to 114432 Sep 13 00:06:25.456905 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:06:25.470440 kernel: loop2: detected capacity change from 0 to 114328 Sep 13 00:06:25.465209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:06:25.481553 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 13 00:06:25.481572 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 13 00:06:25.485761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:25.499319 kernel: loop3: detected capacity change from 0 to 203944 Sep 13 00:06:25.504043 kernel: loop4: detected capacity change from 0 to 114432 Sep 13 00:06:25.508035 kernel: loop5: detected capacity change from 0 to 114328 Sep 13 00:06:25.512186 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Sep 13 00:06:25.512548 (sd-merge)[1186]: Merged extensions into '/usr'. Sep 13 00:06:25.517043 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:06:25.517147 systemd[1]: Reloading... Sep 13 00:06:25.576706 zram_generator::config[1213]: No configuration found. Sep 13 00:06:25.646651 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:06:25.673733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:25.710083 systemd[1]: Reloading finished in 192 ms. Sep 13 00:06:25.740604 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:06:25.742039 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:06:25.760229 systemd[1]: Starting ensure-sysext.service... Sep 13 00:06:25.762033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:06:25.768552 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:06:25.768567 systemd[1]: Reloading... Sep 13 00:06:25.782489 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:06:25.784371 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:06:25.785351 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:06:25.785605 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 13 00:06:25.785660 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. 
Sep 13 00:06:25.798497 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:25.798509 systemd-tmpfiles[1248]: Skipping /boot Sep 13 00:06:25.812374 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:25.812390 systemd-tmpfiles[1248]: Skipping /boot Sep 13 00:06:25.824107 zram_generator::config[1278]: No configuration found. Sep 13 00:06:25.904942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:25.941061 systemd[1]: Reloading finished in 172 ms. Sep 13 00:06:25.961098 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:06:25.973429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:25.980762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:25.982946 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:06:25.985158 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:06:25.990213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:06:26.006265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:26.011287 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:06:26.015161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:26.019504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:26.024306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 13 00:06:26.026688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:26.028219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:26.030994 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:06:26.032566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:26.035071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:26.036706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:26.037215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:26.039785 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:26.043222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:26.049197 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 13 00:06:26.051639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:06:26.054141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:26.062636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:26.066275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:26.067346 augenrules[1341]: No rules Sep 13 00:06:26.070806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:26.072497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:26.074681 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:06:26.076313 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 13 00:06:26.079293 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:26.081079 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:06:26.082738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:26.084822 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:06:26.087853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:26.087979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:26.089344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:26.089474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:26.090910 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:26.091126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:26.092534 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:06:26.104614 systemd[1]: Finished ensure-sysext.service. Sep 13 00:06:26.109684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:26.119207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:26.123079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:26.126140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:26.130278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:26.131307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:26.133289 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 13 00:06:26.138278 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:06:26.139474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:26.139937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:26.140150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:26.140805 systemd-resolved[1315]: Positive Trust Anchors: Sep 13 00:06:26.141142 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:06:26.141223 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:06:26.141341 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:26.141484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:26.142809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:26.142932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:26.151171 systemd-resolved[1315]: Defaulting to hostname 'linux'. Sep 13 00:06:26.153919 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 13 00:06:26.158038 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364) Sep 13 00:06:26.159099 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:06:26.160828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:06:26.162899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:26.163555 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:26.163713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:26.164946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:26.205844 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:06:26.208770 systemd-networkd[1385]: lo: Link UP Sep 13 00:06:26.208779 systemd-networkd[1385]: lo: Gained carrier Sep 13 00:06:26.210369 systemd-networkd[1385]: Enumeration completed Sep 13 00:06:26.211061 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:26.211065 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:26.211536 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:06:26.211740 systemd-networkd[1385]: eth0: Link UP Sep 13 00:06:26.211748 systemd-networkd[1385]: eth0: Gained carrier Sep 13 00:06:26.211762 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:26.212658 systemd[1]: Reached target network.target - Network. Sep 13 00:06:26.214077 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 13 00:06:26.224147 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:06:26.224190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:06:26.224841 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Sep 13 00:06:26.226857 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:06:26.226915 systemd-timesyncd[1386]: Initial clock synchronization to Sat 2025-09-13 00:06:26.206147 UTC. Sep 13 00:06:26.245755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:06:26.265180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:06:26.267393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:26.272003 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:06:26.274352 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:06:26.275475 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:06:26.295324 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:26.310415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:26.327687 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:06:26.328968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:26.329939 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:06:26.330922 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 13 00:06:26.331953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:06:26.333158 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:06:26.334062 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:06:26.334960 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:06:26.335977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:06:26.336022 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:26.336741 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:26.338333 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:06:26.340397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:06:26.351981 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:06:26.353991 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:06:26.355312 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:06:26.356223 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:26.356913 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:26.357765 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:26.357796 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:26.358719 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:06:26.360500 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:06:26.362153 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:26.363779 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:06:26.366037 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:06:26.366902 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:06:26.369887 jq[1419]: false
Sep 13 00:06:26.370324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:06:26.372689 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:06:26.376235 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:06:26.379388 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:06:26.383993 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:06:26.385642 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:06:26.386392 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:06:26.387199 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:06:26.388613 extend-filesystems[1420]: Found loop3
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found loop4
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found loop5
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda1
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda2
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda3
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found usr
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda4
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda6
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda7
Sep 13 00:06:26.392239 extend-filesystems[1420]: Found vda9
Sep 13 00:06:26.392239 extend-filesystems[1420]: Checking size of /dev/vda9
Sep 13 00:06:26.390190 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:06:26.394710 dbus-daemon[1418]: [system] SELinux support is enabled
Sep 13 00:06:26.396426 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:06:26.400999 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:06:26.415381 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:06:26.418185 jq[1432]: true
Sep 13 00:06:26.415543 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:06:26.416126 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:06:26.416282 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:06:26.419405 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:06:26.422519 extend-filesystems[1420]: Resized partition /dev/vda9
Sep 13 00:06:26.422234 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:06:26.430446 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:06:26.433471 update_engine[1429]: I20250913 00:06:26.433117 1429 main.cc:92] Flatcar Update Engine starting
Sep 13 00:06:26.436173 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:06:26.434720 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:06:26.434756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:06:26.437651 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:06:26.437831 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:06:26.439860 jq[1444]: true
Sep 13 00:06:26.443829 update_engine[1429]: I20250913 00:06:26.442878 1429 update_check_scheduler.cc:74] Next update check in 11m52s
Sep 13 00:06:26.444062 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364)
Sep 13 00:06:26.444727 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:06:26.444985 systemd-logind[1427]: New seat seat0.
Sep 13 00:06:26.447346 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:06:26.452061 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:06:26.454834 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:06:26.458964 tar[1441]: linux-arm64/helm
Sep 13 00:06:26.457451 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:06:26.470562 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:06:26.485048 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:06:26.485048 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:06:26.485048 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:06:26.493381 extend-filesystems[1420]: Resized filesystem in /dev/vda9
Sep 13 00:06:26.490340 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:06:26.490568 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:06:26.509039 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:06:26.511169 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:06:26.512711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:06:26.535379 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:06:26.603033 containerd[1446]: time="2025-09-13T00:06:26.602917560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:06:26.631366 containerd[1446]: time="2025-09-13T00:06:26.631316600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633041 containerd[1446]: time="2025-09-13T00:06:26.632919520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633041 containerd[1446]: time="2025-09-13T00:06:26.632979160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:06:26.633041 containerd[1446]: time="2025-09-13T00:06:26.632994760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:06:26.633187 containerd[1446]: time="2025-09-13T00:06:26.633163840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:06:26.633211 containerd[1446]: time="2025-09-13T00:06:26.633189840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633284 containerd[1446]: time="2025-09-13T00:06:26.633241320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633284 containerd[1446]: time="2025-09-13T00:06:26.633257120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633432 containerd[1446]: time="2025-09-13T00:06:26.633410160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633456 containerd[1446]: time="2025-09-13T00:06:26.633431320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633456 containerd[1446]: time="2025-09-13T00:06:26.633444800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633456 containerd[1446]: time="2025-09-13T00:06:26.633454440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633556 containerd[1446]: time="2025-09-13T00:06:26.633524080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633722 containerd[1446]: time="2025-09-13T00:06:26.633703920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633833 containerd[1446]: time="2025-09-13T00:06:26.633814120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:26.633858 containerd[1446]: time="2025-09-13T00:06:26.633833200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:06:26.633918 containerd[1446]: time="2025-09-13T00:06:26.633903920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:06:26.633987 containerd[1446]: time="2025-09-13T00:06:26.633954160Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:06:26.637952 containerd[1446]: time="2025-09-13T00:06:26.637924040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:06:26.638086 containerd[1446]: time="2025-09-13T00:06:26.637986080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:06:26.638086 containerd[1446]: time="2025-09-13T00:06:26.638008680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:06:26.638086 containerd[1446]: time="2025-09-13T00:06:26.638043240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:06:26.638086 containerd[1446]: time="2025-09-13T00:06:26.638057160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:06:26.638226 containerd[1446]: time="2025-09-13T00:06:26.638178200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638420760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638555680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638573200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638586600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638601120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638614160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638626440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638640040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638654760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638667960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638679760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638690960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638711080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639187 containerd[1446]: time="2025-09-13T00:06:26.638724760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638737760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638756720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638770880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638784760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638808400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638820720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638834360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638846440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638857800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638869680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638884480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638904320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638916360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.639443 containerd[1446]: time="2025-09-13T00:06:26.638927720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:06:26.640528 containerd[1446]: time="2025-09-13T00:06:26.640500720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640789520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640853160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640871040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640880920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640893640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640904520Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:06:26.641628 containerd[1446]: time="2025-09-13T00:06:26.640916440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:06:26.641793 containerd[1446]: time="2025-09-13T00:06:26.641269880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:06:26.641793 containerd[1446]: time="2025-09-13T00:06:26.641327240Z" level=info msg="Connect containerd service"
Sep 13 00:06:26.641793 containerd[1446]: time="2025-09-13T00:06:26.641353880Z" level=info msg="using legacy CRI server"
Sep 13 00:06:26.641793 containerd[1446]: time="2025-09-13T00:06:26.641361080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:06:26.641793 containerd[1446]: time="2025-09-13T00:06:26.641448480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:06:26.643734 containerd[1446]: time="2025-09-13T00:06:26.643699720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:06:26.643999 containerd[1446]: time="2025-09-13T00:06:26.643938080Z" level=info msg="Start subscribing containerd event"
Sep 13 00:06:26.643999 containerd[1446]: time="2025-09-13T00:06:26.643990640Z" level=info msg="Start recovering state"
Sep 13 00:06:26.644168 containerd[1446]: time="2025-09-13T00:06:26.644078400Z" level=info msg="Start event monitor"
Sep 13 00:06:26.644168 containerd[1446]: time="2025-09-13T00:06:26.644096120Z" level=info msg="Start snapshots syncer"
Sep 13 00:06:26.644168 containerd[1446]: time="2025-09-13T00:06:26.644105760Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:06:26.644168 containerd[1446]: time="2025-09-13T00:06:26.644112920Z" level=info msg="Start streaming server"
Sep 13 00:06:26.644640 containerd[1446]: time="2025-09-13T00:06:26.644620400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:06:26.647065 containerd[1446]: time="2025-09-13T00:06:26.644670760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:06:26.647065 containerd[1446]: time="2025-09-13T00:06:26.645726360Z" level=info msg="containerd successfully booted in 0.043902s"
Sep 13 00:06:26.644803 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:06:26.812311 tar[1441]: linux-arm64/LICENSE
Sep 13 00:06:26.812489 tar[1441]: linux-arm64/README.md
Sep 13 00:06:26.828462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:06:27.163837 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:06:27.182490 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:06:27.195299 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:06:27.200487 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:06:27.200648 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:06:27.202910 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:06:27.214854 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:06:27.219271 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:06:27.221075 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 13 00:06:27.222240 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:06:27.442260 systemd-networkd[1385]: eth0: Gained IPv6LL
Sep 13 00:06:27.444926 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:06:27.446671 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:06:27.459242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 13 00:06:27.461248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:27.463037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:06:27.482924 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 13 00:06:27.483450 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 13 00:06:27.485596 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:06:27.492058 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:06:28.034081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:28.035763 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:06:28.038402 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:06:28.041063 systemd[1]: Startup finished in 585ms (kernel) + 5.017s (initrd) + 3.347s (userspace) = 8.950s.
Sep 13 00:06:28.423830 kubelet[1531]: E0913 00:06:28.423723 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:06:28.426283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:06:28.426416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:06:32.194817 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:06:32.195918 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:37626.service - OpenSSH per-connection server daemon (10.0.0.1:37626).
Sep 13 00:06:32.239948 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 37626 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.241389 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.248687 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:06:32.259378 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:06:32.260919 systemd-logind[1427]: New session 1 of user core.
Sep 13 00:06:32.267592 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:06:32.271105 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:06:32.277519 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:32.360797 systemd[1550]: Queued start job for default target default.target.
Sep 13 00:06:32.372938 systemd[1550]: Created slice app.slice - User Application Slice.
Sep 13 00:06:32.372969 systemd[1550]: Reached target paths.target - Paths.
Sep 13 00:06:32.372981 systemd[1550]: Reached target timers.target - Timers.
Sep 13 00:06:32.374203 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:06:32.383654 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:06:32.383723 systemd[1550]: Reached target sockets.target - Sockets.
Sep 13 00:06:32.383785 systemd[1550]: Reached target basic.target - Basic System.
Sep 13 00:06:32.383833 systemd[1550]: Reached target default.target - Main User Target.
Sep 13 00:06:32.383860 systemd[1550]: Startup finished in 101ms.
Sep 13 00:06:32.383957 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:06:32.385071 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:06:32.442926 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:37636.service - OpenSSH per-connection server daemon (10.0.0.1:37636).
Sep 13 00:06:32.478278 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 37636 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.479535 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.484216 systemd-logind[1427]: New session 2 of user core.
Sep 13 00:06:32.498188 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:06:32.550115 sshd[1561]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:32.559455 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:37636.service: Deactivated successfully.
Sep 13 00:06:32.560910 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:06:32.562049 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:06:32.563120 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:37638.service - OpenSSH per-connection server daemon (10.0.0.1:37638).
Sep 13 00:06:32.563847 systemd-logind[1427]: Removed session 2.
Sep 13 00:06:32.597391 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 37638 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.598699 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.602488 systemd-logind[1427]: New session 3 of user core.
Sep 13 00:06:32.614167 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:06:32.664416 sshd[1568]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:32.675475 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:37638.service: Deactivated successfully.
Sep 13 00:06:32.676940 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:06:32.678220 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:06:32.679397 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:37642.service - OpenSSH per-connection server daemon (10.0.0.1:37642).
Sep 13 00:06:32.681412 systemd-logind[1427]: Removed session 3.
Sep 13 00:06:32.713832 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 37642 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.715171 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.719001 systemd-logind[1427]: New session 4 of user core.
Sep 13 00:06:32.732188 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:06:32.783973 sshd[1575]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:32.797360 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:37642.service: Deactivated successfully.
Sep 13 00:06:32.798686 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:06:32.799211 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:06:32.800707 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:37648.service - OpenSSH per-connection server daemon (10.0.0.1:37648).
Sep 13 00:06:32.801384 systemd-logind[1427]: Removed session 4.
Sep 13 00:06:32.834234 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 37648 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.835361 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.839065 systemd-logind[1427]: New session 5 of user core.
Sep 13 00:06:32.851224 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:06:32.914576 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:06:32.914859 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:32.927767 sudo[1585]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:32.929494 sshd[1582]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:32.952428 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:37648.service: Deactivated successfully.
Sep 13 00:06:32.953847 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:06:32.955076 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:06:32.956237 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:37658.service - OpenSSH per-connection server daemon (10.0.0.1:37658).
Sep 13 00:06:32.956928 systemd-logind[1427]: Removed session 5.
Sep 13 00:06:32.990393 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 37658 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:32.991566 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:32.995602 systemd-logind[1427]: New session 6 of user core.
Sep 13 00:06:33.010209 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:06:33.061841 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:06:33.062459 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:33.065339 sudo[1594]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:33.069878 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:06:33.070167 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:33.086261 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:06:33.087381 auditctl[1597]: No rules
Sep 13 00:06:33.088223 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:06:33.088429 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:06:33.090006 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:06:33.114630 augenrules[1615]: No rules
Sep 13 00:06:33.115989 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:06:33.117628 sudo[1593]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:33.119530 sshd[1590]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:33.131653 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:37658.service: Deactivated successfully.
Sep 13 00:06:33.134470 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:06:33.135742 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:06:33.148390 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:37674.service - OpenSSH per-connection server daemon (10.0.0.1:37674).
Sep 13 00:06:33.149491 systemd-logind[1427]: Removed session 6.
Sep 13 00:06:33.179643 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 37674 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:06:33.180935 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:33.185290 systemd-logind[1427]: New session 7 of user core.
Sep 13 00:06:33.197236 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:06:33.248909 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:06:33.249984 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:33.535276 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:06:33.535438 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:06:33.750008 dockerd[1644]: time="2025-09-13T00:06:33.749718991Z" level=info msg="Starting up"
Sep 13 00:06:33.908540 dockerd[1644]: time="2025-09-13T00:06:33.908425777Z" level=info msg="Loading containers: start."
Sep 13 00:06:34.025049 kernel: Initializing XFRM netlink socket
Sep 13 00:06:34.096095 systemd-networkd[1385]: docker0: Link UP
Sep 13 00:06:34.118327 dockerd[1644]: time="2025-09-13T00:06:34.118271066Z" level=info msg="Loading containers: done."
Sep 13 00:06:34.135767 dockerd[1644]: time="2025-09-13T00:06:34.135687835Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:06:34.135911 dockerd[1644]: time="2025-09-13T00:06:34.135821566Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:06:34.135956 dockerd[1644]: time="2025-09-13T00:06:34.135930590Z" level=info msg="Daemon has completed initialization"
Sep 13 00:06:34.175767 dockerd[1644]: time="2025-09-13T00:06:34.175491287Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:06:34.175802 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:06:34.842301 containerd[1446]: time="2025-09-13T00:06:34.842050990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:06:35.507792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322086386.mount: Deactivated successfully.
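Every entry in this capture shares journald's short-precise prefix: a month/day timestamp with microseconds, a syslog identifier with its PID in brackets, and the free-form message. A minimal sketch of splitting a line into those fields (the regex and field names here are my own, chosen to match the lines in this log, not part of any tool's API):

```python
import re

# Matches journald short-precise output seen in this log,
# e.g. "Sep 13 00:06:34.175802 systemd[1]: Started docker.service - ..."
LINE_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<ident>[\w.\-:()@]+)\[(?P<pid>\d+)\]: "
    r"(?P<msg>.*)$"
)

def parse_entry(line: str) -> dict:
    """Split one log line into timestamp, identifier, PID, and message."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized log line: {line!r}")
    fields = m.groupdict()
    fields["pid"] = int(fields["pid"])
    return fields

entry = parse_entry(
    "Sep 13 00:06:34.175802 systemd[1]: "
    "Started docker.service - Docker Application Container Engine."
)
print(entry["ident"], entry["pid"])  # systemd 1
```

Kernel lines (`kernel: ...`) carry no PID and would need a second pattern; this sketch only covers the `ident[pid]:` form that dominates this section.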
Sep 13 00:06:36.431658 containerd[1446]: time="2025-09-13T00:06:36.430926152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:36.432145 containerd[1446]: time="2025-09-13T00:06:36.432115656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 13 00:06:36.433919 containerd[1446]: time="2025-09-13T00:06:36.433888037Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:36.437190 containerd[1446]: time="2025-09-13T00:06:36.437155094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:36.438592 containerd[1446]: time="2025-09-13T00:06:36.438554736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.596450254s"
Sep 13 00:06:36.438653 containerd[1446]: time="2025-09-13T00:06:36.438595477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:06:36.439926 containerd[1446]: time="2025-09-13T00:06:36.439896486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:06:37.537616 containerd[1446]: time="2025-09-13T00:06:37.537549547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:37.538073 containerd[1446]: time="2025-09-13T00:06:37.538035959Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 13 00:06:37.538815 containerd[1446]: time="2025-09-13T00:06:37.538764137Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:37.541818 containerd[1446]: time="2025-09-13T00:06:37.541759411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:37.543063 containerd[1446]: time="2025-09-13T00:06:37.543031494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.103015906s"
Sep 13 00:06:37.543119 containerd[1446]: time="2025-09-13T00:06:37.543069876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:06:37.543513 containerd[1446]: time="2025-09-13T00:06:37.543488560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:06:38.622729 containerd[1446]: time="2025-09-13T00:06:38.622488401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:38.623956 containerd[1446]: time="2025-09-13T00:06:38.623704249Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 13 00:06:38.625140 containerd[1446]: time="2025-09-13T00:06:38.625090139Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:38.631583 containerd[1446]: time="2025-09-13T00:06:38.631482273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:38.633650 containerd[1446]: time="2025-09-13T00:06:38.632716871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.089194088s"
Sep 13 00:06:38.633650 containerd[1446]: time="2025-09-13T00:06:38.632753934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:06:38.635032 containerd[1446]: time="2025-09-13T00:06:38.634984320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:06:38.676750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:06:38.687259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:38.822559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:38.831447 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:06:38.884901 kubelet[1865]: E0913 00:06:38.884761 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:06:38.890830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:06:38.891060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:06:39.761221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964754171.mount: Deactivated successfully.
Sep 13 00:06:40.038405 containerd[1446]: time="2025-09-13T00:06:40.038278103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:40.039636 containerd[1446]: time="2025-09-13T00:06:40.039594581Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 13 00:06:40.040464 containerd[1446]: time="2025-09-13T00:06:40.040444498Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:40.042452 containerd[1446]: time="2025-09-13T00:06:40.042413978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:40.043242 containerd[1446]: time="2025-09-13T00:06:40.043207799Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.408188134s"
Sep 13 00:06:40.043281 containerd[1446]: time="2025-09-13T00:06:40.043244424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:06:40.043693 containerd[1446]: time="2025-09-13T00:06:40.043670762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:06:40.520812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122275735.mount: Deactivated successfully.
Sep 13 00:06:41.337058 containerd[1446]: time="2025-09-13T00:06:41.336829673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.337474 containerd[1446]: time="2025-09-13T00:06:41.337266372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 13 00:06:41.338445 containerd[1446]: time="2025-09-13T00:06:41.338413338Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.341716 containerd[1446]: time="2025-09-13T00:06:41.341665394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.343210 containerd[1446]: time="2025-09-13T00:06:41.343049702Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.299346994s"
Sep 13 00:06:41.343210 containerd[1446]: time="2025-09-13T00:06:41.343092244Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:06:41.343873 containerd[1446]: time="2025-09-13T00:06:41.343659210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:06:41.786216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777237613.mount: Deactivated successfully.
Sep 13 00:06:41.791591 containerd[1446]: time="2025-09-13T00:06:41.791539975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.792129 containerd[1446]: time="2025-09-13T00:06:41.792101783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 13 00:06:41.793238 containerd[1446]: time="2025-09-13T00:06:41.793207046Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.795610 containerd[1446]: time="2025-09-13T00:06:41.795571788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:41.796396 containerd[1446]: time="2025-09-13T00:06:41.796368859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.679462ms"
Sep 13 00:06:41.796453 containerd[1446]: time="2025-09-13T00:06:41.796402125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:06:41.797159 containerd[1446]: time="2025-09-13T00:06:41.796949339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:06:42.226055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924050554.mount: Deactivated successfully.
Sep 13 00:06:43.808553 containerd[1446]: time="2025-09-13T00:06:43.808506251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:43.810064 containerd[1446]: time="2025-09-13T00:06:43.809960287Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 13 00:06:43.810963 containerd[1446]: time="2025-09-13T00:06:43.810912157Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:43.814206 containerd[1446]: time="2025-09-13T00:06:43.814175172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:43.815663 containerd[1446]: time="2025-09-13T00:06:43.815633166Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.01865092s"
Sep 13 00:06:43.815726 containerd[1446]: time="2025-09-13T00:06:43.815664794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:06:47.828939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:47.844310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:47.874011 systemd[1]: Reloading requested from client PID 2022 ('systemctl') (unit session-7.scope)...
Sep 13 00:06:47.874036 systemd[1]: Reloading...
Sep 13 00:06:47.944058 zram_generator::config[2061]: No configuration found.
Sep 13 00:06:48.034932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:06:48.089388 systemd[1]: Reloading finished in 215 ms.
Sep 13 00:06:48.136329 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:06:48.136395 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:06:48.136606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:48.141772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:48.256603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:48.260656 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:06:48.298321 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
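Each containerd "Pulled image" entry above reports the blob size in bytes and the wall-clock pull time, so average pull throughput can be recovered directly from the log. A small sketch (the regex and function are my own; it only handles the `…s` seconds form seen above, not the `…ms` form used for the tiny pause image):

```python
import re

# containerd pull-summary messages, as journald prints them, embed the size
# and duration with escaped quotes, e.g.:  size \"66535646\" in 2.01865092s
SUMMARY_RE = re.compile(r'size \\"(?P<size>\d+)\\" in (?P<secs>[\d.]+)s')

def pull_throughput_mib(msg: str) -> float:
    """Average pull throughput in MiB/s from one containerd summary message."""
    m = SUMMARY_RE.search(msg)
    if m is None:
        raise ValueError("no pull summary found in message")
    return int(m.group("size")) / float(m.group("secs")) / (1024 * 1024)

# etcd pull from the log: 66535646 bytes in 2.01865092s -> roughly 31 MiB/s
etcd_msg = r'Pulled image \"registry.k8s.io/etcd:3.5.15-0\" ... size \"66535646\" in 2.01865092s'
print(f"{pull_throughput_mib(etcd_msg):.1f} MiB/s")
```

Run over every summary in this section, this makes it easy to spot which image dominated the pull phase (here, the ~63 MiB etcd image).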
Sep 13 00:06:48.298321 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:06:48.298321 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:06:48.298847 kubelet[2107]: I0913 00:06:48.298340 2107 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:06:49.473863 kubelet[2107]: I0913 00:06:49.473799 2107 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:06:49.473863 kubelet[2107]: I0913 00:06:49.473839 2107 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:06:49.474255 kubelet[2107]: I0913 00:06:49.474095 2107 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:06:49.500953 kubelet[2107]: E0913 00:06:49.500318 2107 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:06:49.501406 kubelet[2107]: I0913 00:06:49.501365 2107 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:06:49.511466 kubelet[2107]: E0913 00:06:49.511416 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:06:49.511466 kubelet[2107]: I0913 00:06:49.511456 2107 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:06:49.514925 kubelet[2107]: I0913 00:06:49.514892 2107 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:06:49.515813 kubelet[2107]: I0913 00:06:49.515761 2107 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:06:49.515976 kubelet[2107]: I0913 00:06:49.515935 2107 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:06:49.516167 kubelet[2107]: I0913 00:06:49.515965 2107 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:06:49.516265 kubelet[2107]: I0913 00:06:49.516230 2107 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:06:49.516265 kubelet[2107]: I0913 00:06:49.516241 2107 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:06:49.516514 kubelet[2107]: I0913 00:06:49.516476 2107 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:06:49.520453 kubelet[2107]: I0913 00:06:49.520411 2107 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:06:49.520453 kubelet[2107]: I0913 00:06:49.520458 2107 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:06:49.521125 kubelet[2107]: I0913 00:06:49.520654 2107 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:06:49.521125 kubelet[2107]: I0913 00:06:49.520743 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:06:49.523505 kubelet[2107]: W0913 00:06:49.523451 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused
Sep 13 00:06:49.523636 kubelet[2107]: E0913 00:06:49.523618 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:06:49.524471 kubelet[2107]: W0913 00:06:49.524367 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused
Sep 13 00:06:49.524541 kubelet[2107]: E0913 00:06:49.524472 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:06:49.524714 kubelet[2107]: I0913 00:06:49.524697 2107 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:06:49.525629 kubelet[2107]: I0913 00:06:49.525611 2107 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:06:49.525890 kubelet[2107]: W0913 00:06:49.525872 2107 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:06:49.527074 kubelet[2107]: I0913 00:06:49.527059 2107 server.go:1274] "Started kubelet"
Sep 13 00:06:49.528572 kubelet[2107]: I0913 00:06:49.528035 2107 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:06:49.528572 kubelet[2107]: I0913 00:06:49.528143 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:06:49.528572 kubelet[2107]: I0913 00:06:49.528467 2107 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:06:49.530060 kubelet[2107]: E0913 00:06:49.530039 2107 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:06:49.530413 kubelet[2107]: I0913 00:06:49.530393 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:06:49.530608 kubelet[2107]: I0913 00:06:49.530592 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:06:49.531583 kubelet[2107]: I0913 00:06:49.531554 2107 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:06:49.535062 kubelet[2107]: E0913 00:06:49.530910 2107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aedad7be86ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:06:49.527027437 +0000 UTC m=+1.263298189,LastTimestamp:2025-09-13 00:06:49.527027437 +0000 UTC m=+1.263298189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:06:49.535296 kubelet[2107]: E0913 00:06:49.535242 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:06:49.535848 kubelet[2107]: I0913 00:06:49.535812 2107 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:06:49.535848 kubelet[2107]: E0913 00:06:49.535820 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms"
Sep 13 00:06:49.536068 kubelet[2107]: I0913 00:06:49.536044 2107 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:06:49.536180 kubelet[2107]: I0913 00:06:49.536156 2107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:06:49.536226 kubelet[2107]: I0913 00:06:49.536181 2107 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:06:49.536438 kubelet[2107]: I0913 00:06:49.536373 2107 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:06:49.537206 kubelet[2107]: W0913 00:06:49.536879 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused
Sep 13 00:06:49.537206 kubelet[2107]: E0913 00:06:49.536935 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:06:49.538888 kubelet[2107]: I0913 00:06:49.538859 2107 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:06:49.552428 kubelet[2107]: I0913 00:06:49.551950 2107 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:06:49.552428 kubelet[2107]: I0913 00:06:49.551971 2107 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:06:49.552428 kubelet[2107]: I0913 00:06:49.551991 2107 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:06:49.555846 kubelet[2107]: I0913 00:06:49.555783 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:06:49.557392 kubelet[2107]: I0913 00:06:49.557266 2107 policy_none.go:49] "None policy: Start"
Sep 13 00:06:49.557779 kubelet[2107]: I0913 00:06:49.557752 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:06:49.557779 kubelet[2107]: I0913 00:06:49.557779 2107 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:06:49.557856 kubelet[2107]: I0913 00:06:49.557798 2107 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:06:49.557856 kubelet[2107]: E0913 00:06:49.557846 2107 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:06:49.558647 kubelet[2107]: I0913 00:06:49.558333 2107 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:06:49.558647 kubelet[2107]: I0913 00:06:49.558362 2107 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:06:49.559546 kubelet[2107]: W0913 00:06:49.559516 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused
Sep 13 00:06:49.559684 kubelet[2107]: E0913 00:06:49.559663 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:06:49.565286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:06:49.584936 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:06:49.588711 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:06:49.603120 kubelet[2107]: I0913 00:06:49.603093 2107 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:06:49.603481 kubelet[2107]: I0913 00:06:49.603464 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:49.604202 kubelet[2107]: I0913 00:06:49.603548 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:49.604202 kubelet[2107]: I0913 00:06:49.604186 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:49.604952 kubelet[2107]: E0913 00:06:49.604935 2107 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:06:49.666086 systemd[1]: Created slice kubepods-burstable-podb7b2c6574186d6f06c0128be083410b9.slice - libcontainer container kubepods-burstable-podb7b2c6574186d6f06c0128be083410b9.slice. Sep 13 00:06:49.680614 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 13 00:06:49.691601 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
Sep 13 00:06:49.705950 kubelet[2107]: I0913 00:06:49.705474 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:06:49.706102 kubelet[2107]: E0913 00:06:49.706009 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 13 00:06:49.736769 kubelet[2107]: E0913 00:06:49.736658 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Sep 13 00:06:49.738002 kubelet[2107]: I0913 00:06:49.737962 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:06:49.738060 kubelet[2107]: I0913 00:06:49.738011 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:49.738060 kubelet[2107]: I0913 00:06:49.738049 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:49.738124 kubelet[2107]: I0913 00:06:49.738065 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:49.738124 kubelet[2107]: I0913 00:06:49.738082 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:49.738124 kubelet[2107]: I0913 00:06:49.738106 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:49.738124 kubelet[2107]: I0913 00:06:49.738122 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:49.738220 kubelet[2107]: I0913 00:06:49.738146 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:49.738220 kubelet[2107]: I0913 00:06:49.738160 2107 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:49.908004 kubelet[2107]: I0913 00:06:49.907644 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:06:49.908149 kubelet[2107]: E0913 00:06:49.908050 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 13 00:06:49.979096 kubelet[2107]: E0913 00:06:49.979044 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:49.980169 containerd[1446]: time="2025-09-13T00:06:49.979771697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7b2c6574186d6f06c0128be083410b9,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:49.990590 kubelet[2107]: E0913 00:06:49.990481 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:49.991137 containerd[1446]: time="2025-09-13T00:06:49.991078193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:49.993957 kubelet[2107]: E0913 00:06:49.993923 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:49.994962 containerd[1446]: time="2025-09-13T00:06:49.994858341Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:50.138219 kubelet[2107]: E0913 00:06:50.138176 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Sep 13 00:06:50.310699 kubelet[2107]: I0913 00:06:50.310306 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:06:50.311070 kubelet[2107]: E0913 00:06:50.311040 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 13 00:06:50.367702 kubelet[2107]: W0913 00:06:50.367636 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 13 00:06:50.367841 kubelet[2107]: E0913 00:06:50.367717 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:50.453408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762591499.mount: Deactivated successfully. 
Sep 13 00:06:50.468254 containerd[1446]: time="2025-09-13T00:06:50.468198041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:50.469170 containerd[1446]: time="2025-09-13T00:06:50.469115036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:50.472970 containerd[1446]: time="2025-09-13T00:06:50.471355380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 13 00:06:50.472970 containerd[1446]: time="2025-09-13T00:06:50.472351631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:50.473091 kubelet[2107]: W0913 00:06:50.472265 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 13 00:06:50.473091 kubelet[2107]: E0913 00:06:50.472305 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:50.474038 containerd[1446]: time="2025-09-13T00:06:50.473865680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:06:50.475072 containerd[1446]: time="2025-09-13T00:06:50.475036797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Sep 13 00:06:50.477970 containerd[1446]: time="2025-09-13T00:06:50.477158258Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:50.482050 containerd[1446]: time="2025-09-13T00:06:50.481726199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:50.482541 containerd[1446]: time="2025-09-13T00:06:50.482338489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.149812ms" Sep 13 00:06:50.483573 containerd[1446]: time="2025-09-13T00:06:50.483005242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.125179ms" Sep 13 00:06:50.485616 containerd[1446]: time="2025-09-13T00:06:50.485578443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.461185ms" Sep 13 00:06:50.547623 kubelet[2107]: W0913 00:06:50.547544 2107 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 13 00:06:50.547623 kubelet[2107]: E0913 00:06:50.547620 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:50.622138 containerd[1446]: time="2025-09-13T00:06:50.621953171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:50.622923 containerd[1446]: time="2025-09-13T00:06:50.622578577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:50.622923 containerd[1446]: time="2025-09-13T00:06:50.622627602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:50.622923 containerd[1446]: time="2025-09-13T00:06:50.622638238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.623275 containerd[1446]: time="2025-09-13T00:06:50.622968656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:50.623275 containerd[1446]: time="2025-09-13T00:06:50.622992088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.625343 containerd[1446]: time="2025-09-13T00:06:50.623878773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.625343 containerd[1446]: time="2025-09-13T00:06:50.623870136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.626506 containerd[1446]: time="2025-09-13T00:06:50.626236361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:50.626506 containerd[1446]: time="2025-09-13T00:06:50.626306259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:50.626506 containerd[1446]: time="2025-09-13T00:06:50.626321854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.626506 containerd[1446]: time="2025-09-13T00:06:50.626406108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:50.643236 systemd[1]: Started cri-containerd-70acf38822b1cbeefef4ca5e59c80766cc2e103fb5dcb60fb8b1c2890bd4ff63.scope - libcontainer container 70acf38822b1cbeefef4ca5e59c80766cc2e103fb5dcb60fb8b1c2890bd4ff63. Sep 13 00:06:50.644399 systemd[1]: Started cri-containerd-7bdfe6242ec9a813687b0b2ea79ba27070654c8f695ffb6b6ab2d73d21198e58.scope - libcontainer container 7bdfe6242ec9a813687b0b2ea79ba27070654c8f695ffb6b6ab2d73d21198e58. Sep 13 00:06:50.648075 systemd[1]: Started cri-containerd-4bd7473d95cf8a1ffac5067256eac8c4952025f4580490aa3d8500382518eca1.scope - libcontainer container 4bd7473d95cf8a1ffac5067256eac8c4952025f4580490aa3d8500382518eca1. 
Sep 13 00:06:50.649353 kubelet[2107]: W0913 00:06:50.649307 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 13 00:06:50.649353 kubelet[2107]: E0913 00:06:50.649360 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:50.684476 containerd[1446]: time="2025-09-13T00:06:50.684402337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7b2c6574186d6f06c0128be083410b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bdfe6242ec9a813687b0b2ea79ba27070654c8f695ffb6b6ab2d73d21198e58\"" Sep 13 00:06:50.686771 kubelet[2107]: E0913 00:06:50.686740 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:50.690086 containerd[1446]: time="2025-09-13T00:06:50.689908027Z" level=info msg="CreateContainer within sandbox \"7bdfe6242ec9a813687b0b2ea79ba27070654c8f695ffb6b6ab2d73d21198e58\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:06:50.692538 containerd[1446]: time="2025-09-13T00:06:50.692503501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bd7473d95cf8a1ffac5067256eac8c4952025f4580490aa3d8500382518eca1\"" Sep 13 00:06:50.695404 kubelet[2107]: E0913 00:06:50.695198 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:50.696437 containerd[1446]: time="2025-09-13T00:06:50.696083909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"70acf38822b1cbeefef4ca5e59c80766cc2e103fb5dcb60fb8b1c2890bd4ff63\"" Sep 13 00:06:50.697529 containerd[1446]: time="2025-09-13T00:06:50.697497230Z" level=info msg="CreateContainer within sandbox \"4bd7473d95cf8a1ffac5067256eac8c4952025f4580490aa3d8500382518eca1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:06:50.697729 kubelet[2107]: E0913 00:06:50.697707 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:50.699694 containerd[1446]: time="2025-09-13T00:06:50.699604696Z" level=info msg="CreateContainer within sandbox \"70acf38822b1cbeefef4ca5e59c80766cc2e103fb5dcb60fb8b1c2890bd4ff63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:06:50.715529 containerd[1446]: time="2025-09-13T00:06:50.715454374Z" level=info msg="CreateContainer within sandbox \"7bdfe6242ec9a813687b0b2ea79ba27070654c8f695ffb6b6ab2d73d21198e58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d49c45d0994a7dee79dc0fc43c413a410b69e2da820fb12087e8d14e8f300554\"" Sep 13 00:06:50.717056 containerd[1446]: time="2025-09-13T00:06:50.716162794Z" level=info msg="StartContainer for \"d49c45d0994a7dee79dc0fc43c413a410b69e2da820fb12087e8d14e8f300554\"" Sep 13 00:06:50.732327 containerd[1446]: time="2025-09-13T00:06:50.732282587Z" level=info msg="CreateContainer within sandbox \"4bd7473d95cf8a1ffac5067256eac8c4952025f4580490aa3d8500382518eca1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"52f839fe8e4833b08ada6d73dbf2fd0c235d5b2f5e2eeab224a40c20b5350148\"" Sep 13 00:06:50.734443 containerd[1446]: time="2025-09-13T00:06:50.733260684Z" level=info msg="StartContainer for \"52f839fe8e4833b08ada6d73dbf2fd0c235d5b2f5e2eeab224a40c20b5350148\"" Sep 13 00:06:50.741111 containerd[1446]: time="2025-09-13T00:06:50.741040988Z" level=info msg="CreateContainer within sandbox \"70acf38822b1cbeefef4ca5e59c80766cc2e103fb5dcb60fb8b1c2890bd4ff63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9730665e633e2e448362b43eb2e5b44c83ab0b322a10e8eb8fdac2f1e9b55cc\"" Sep 13 00:06:50.741830 containerd[1446]: time="2025-09-13T00:06:50.741786076Z" level=info msg="StartContainer for \"c9730665e633e2e448362b43eb2e5b44c83ab0b322a10e8eb8fdac2f1e9b55cc\"" Sep 13 00:06:50.744222 systemd[1]: Started cri-containerd-d49c45d0994a7dee79dc0fc43c413a410b69e2da820fb12087e8d14e8f300554.scope - libcontainer container d49c45d0994a7dee79dc0fc43c413a410b69e2da820fb12087e8d14e8f300554. Sep 13 00:06:50.771301 systemd[1]: Started cri-containerd-52f839fe8e4833b08ada6d73dbf2fd0c235d5b2f5e2eeab224a40c20b5350148.scope - libcontainer container 52f839fe8e4833b08ada6d73dbf2fd0c235d5b2f5e2eeab224a40c20b5350148. Sep 13 00:06:50.774640 systemd[1]: Started cri-containerd-c9730665e633e2e448362b43eb2e5b44c83ab0b322a10e8eb8fdac2f1e9b55cc.scope - libcontainer container c9730665e633e2e448362b43eb2e5b44c83ab0b322a10e8eb8fdac2f1e9b55cc. 
Sep 13 00:06:50.870484 containerd[1446]: time="2025-09-13T00:06:50.870414090Z" level=info msg="StartContainer for \"c9730665e633e2e448362b43eb2e5b44c83ab0b322a10e8eb8fdac2f1e9b55cc\" returns successfully" Sep 13 00:06:50.870484 containerd[1446]: time="2025-09-13T00:06:50.870455677Z" level=info msg="StartContainer for \"52f839fe8e4833b08ada6d73dbf2fd0c235d5b2f5e2eeab224a40c20b5350148\" returns successfully" Sep 13 00:06:50.870643 containerd[1446]: time="2025-09-13T00:06:50.870440482Z" level=info msg="StartContainer for \"d49c45d0994a7dee79dc0fc43c413a410b69e2da820fb12087e8d14e8f300554\" returns successfully" Sep 13 00:06:51.113052 kubelet[2107]: I0913 00:06:51.112640 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:06:51.567007 kubelet[2107]: E0913 00:06:51.566905 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:51.567839 kubelet[2107]: E0913 00:06:51.567812 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:51.571637 kubelet[2107]: E0913 00:06:51.571604 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:52.308037 kubelet[2107]: E0913 00:06:52.307994 2107 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:06:52.435566 kubelet[2107]: I0913 00:06:52.435283 2107 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:06:52.435566 kubelet[2107]: E0913 00:06:52.435323 2107 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" 
not found" Sep 13 00:06:52.522590 kubelet[2107]: I0913 00:06:52.522530 2107 apiserver.go:52] "Watching apiserver" Sep 13 00:06:52.537081 kubelet[2107]: I0913 00:06:52.537047 2107 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:06:52.577775 kubelet[2107]: E0913 00:06:52.577671 2107 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:52.578113 kubelet[2107]: E0913 00:06:52.577845 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:54.217713 systemd[1]: Reloading requested from client PID 2383 ('systemctl') (unit session-7.scope)... Sep 13 00:06:54.217726 systemd[1]: Reloading... Sep 13 00:06:54.304599 zram_generator::config[2421]: No configuration found. Sep 13 00:06:54.392484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:54.458283 systemd[1]: Reloading finished in 240 ms. Sep 13 00:06:54.492497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:54.504122 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:06:54.504370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:54.504443 systemd[1]: kubelet.service: Consumed 1.450s CPU time, 128.6M memory peak, 0B memory swap peak. Sep 13 00:06:54.516262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:54.632075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:06:54.635948 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:06:54.677329 kubelet[2464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:54.678030 kubelet[2464]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:06:54.678030 kubelet[2464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:54.678201 kubelet[2464]: I0913 00:06:54.677989 2464 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:54.683629 kubelet[2464]: I0913 00:06:54.683587 2464 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:06:54.683629 kubelet[2464]: I0913 00:06:54.683623 2464 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:54.683899 kubelet[2464]: I0913 00:06:54.683872 2464 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:06:54.685255 kubelet[2464]: I0913 00:06:54.685232 2464 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:06:54.688137 kubelet[2464]: I0913 00:06:54.688034 2464 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:54.693357 kubelet[2464]: E0913 00:06:54.693302 2464 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:54.693357 kubelet[2464]: I0913 00:06:54.693333 2464 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:54.695867 kubelet[2464]: I0913 00:06:54.695845 2464 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:06:54.695960 kubelet[2464]: I0913 00:06:54.695947 2464 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:06:54.696172 kubelet[2464]: I0913 00:06:54.696128 2464 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:54.696319 kubelet[2464]: I0913 00:06:54.696162 2464 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:54.696382 kubelet[2464]: I0913 00:06:54.696327 2464 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:06:54.696382 kubelet[2464]: I0913 00:06:54.696337 2464 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:06:54.696382 kubelet[2464]: I0913 00:06:54.696372 2464 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:54.696480 kubelet[2464]: I0913 00:06:54.696469 2464 kubelet.go:408] "Attempting 
to sync node with API server" Sep 13 00:06:54.696509 kubelet[2464]: I0913 00:06:54.696484 2464 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:54.696509 kubelet[2464]: I0913 00:06:54.696500 2464 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:06:54.696557 kubelet[2464]: I0913 00:06:54.696513 2464 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:54.697042 kubelet[2464]: I0913 00:06:54.696941 2464 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:06:54.697436 kubelet[2464]: I0913 00:06:54.697414 2464 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:06:54.697811 kubelet[2464]: I0913 00:06:54.697789 2464 server.go:1274] "Started kubelet" Sep 13 00:06:54.698114 kubelet[2464]: I0913 00:06:54.698082 2464 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:54.698900 kubelet[2464]: I0913 00:06:54.698875 2464 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:06:54.699346 kubelet[2464]: I0913 00:06:54.699266 2464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:54.699757 kubelet[2464]: I0913 00:06:54.699714 2464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:54.699910 kubelet[2464]: I0913 00:06:54.699896 2464 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:54.700959 kubelet[2464]: I0913 00:06:54.700477 2464 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:06:54.700959 kubelet[2464]: I0913 00:06:54.700586 2464 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:06:54.700959 kubelet[2464]: I0913 00:06:54.700707 2464 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:54.702375 
kubelet[2464]: I0913 00:06:54.702335 2464 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:54.704415 kubelet[2464]: I0913 00:06:54.704379 2464 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:54.705508 kubelet[2464]: I0913 00:06:54.705489 2464 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:06:54.705599 kubelet[2464]: I0913 00:06:54.705583 2464 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:06:54.718093 kubelet[2464]: E0913 00:06:54.718025 2464 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:06:54.734222 kubelet[2464]: I0913 00:06:54.734162 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:06:54.736450 kubelet[2464]: I0913 00:06:54.736270 2464 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:06:54.736450 kubelet[2464]: I0913 00:06:54.736303 2464 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:06:54.736450 kubelet[2464]: I0913 00:06:54.736323 2464 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:06:54.736450 kubelet[2464]: E0913 00:06:54.736365 2464 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:54.753683 kubelet[2464]: I0913 00:06:54.753557 2464 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:06:54.753683 kubelet[2464]: I0913 00:06:54.753580 2464 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:54.753683 kubelet[2464]: I0913 00:06:54.753681 2464 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:54.753916 kubelet[2464]: I0913 00:06:54.753879 2464 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:06:54.753916 kubelet[2464]: I0913 00:06:54.753901 2464 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:06:54.753992 kubelet[2464]: I0913 00:06:54.753921 2464 policy_none.go:49] "None policy: Start" Sep 13 00:06:54.754566 kubelet[2464]: I0913 00:06:54.754534 2464 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:06:54.754566 kubelet[2464]: I0913 00:06:54.754561 2464 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:54.754736 kubelet[2464]: I0913 00:06:54.754718 2464 state_mem.go:75] "Updated machine memory state" Sep 13 00:06:54.759251 kubelet[2464]: I0913 00:06:54.759216 2464 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:06:54.759392 kubelet[2464]: I0913 00:06:54.759376 2464 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:54.759430 kubelet[2464]: I0913 00:06:54.759393 2464 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:54.759786 kubelet[2464]: I0913 00:06:54.759727 2464 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:54.863390 kubelet[2464]: I0913 00:06:54.863318 2464 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:06:54.874630 kubelet[2464]: I0913 00:06:54.872315 2464 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:06:54.874630 kubelet[2464]: I0913 00:06:54.872395 2464 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:06:55.001636 kubelet[2464]: I0913 00:06:55.001595 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:55.001636 kubelet[2464]: I0913 00:06:55.001633 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:55.001812 kubelet[2464]: I0913 00:06:55.001659 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:06:55.001812 kubelet[2464]: I0913 00:06:55.001680 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:55.001812 kubelet[2464]: I0913 00:06:55.001697 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:55.001812 kubelet[2464]: I0913 00:06:55.001718 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:55.001812 kubelet[2464]: I0913 00:06:55.001739 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:06:55.001949 kubelet[2464]: I0913 00:06:55.001756 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:55.001949 kubelet[2464]: I0913 00:06:55.001794 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b7b2c6574186d6f06c0128be083410b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7b2c6574186d6f06c0128be083410b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:06:55.148864 kubelet[2464]: E0913 00:06:55.146241 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.148864 kubelet[2464]: E0913 00:06:55.146772 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.148864 kubelet[2464]: E0913 00:06:55.146898 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.209310 sudo[2500]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:06:55.209596 sudo[2500]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:06:55.634169 sudo[2500]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:55.698155 kubelet[2464]: I0913 00:06:55.697767 2464 apiserver.go:52] "Watching apiserver" Sep 13 00:06:55.747961 kubelet[2464]: E0913 00:06:55.747925 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.748293 kubelet[2464]: E0913 00:06:55.748235 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.749548 kubelet[2464]: E0913 00:06:55.749285 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:55.785286 kubelet[2464]: I0913 00:06:55.785205 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.785187538 podStartE2EDuration="1.785187538s" podCreationTimestamp="2025-09-13 00:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:55.773719897 +0000 UTC m=+1.134499512" watchObservedRunningTime="2025-09-13 00:06:55.785187538 +0000 UTC m=+1.145967153" Sep 13 00:06:55.793613 kubelet[2464]: I0913 00:06:55.793458 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.793430634 podStartE2EDuration="1.793430634s" podCreationTimestamp="2025-09-13 00:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:55.785421316 +0000 UTC m=+1.146200931" watchObservedRunningTime="2025-09-13 00:06:55.793430634 +0000 UTC m=+1.154210249" Sep 13 00:06:55.793613 kubelet[2464]: I0913 00:06:55.793543 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.793539325 podStartE2EDuration="1.793539325s" podCreationTimestamp="2025-09-13 00:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:55.793246923 +0000 UTC m=+1.154026498" watchObservedRunningTime="2025-09-13 00:06:55.793539325 +0000 UTC m=+1.154318900" Sep 13 00:06:55.800937 kubelet[2464]: I0913 00:06:55.800911 2464 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:06:56.750402 kubelet[2464]: E0913 00:06:56.750043 2464 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:06:57.270274 sudo[1626]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:57.272385 sshd[1623]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:57.277521 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:37674.service: Deactivated successfully. Sep 13 00:06:57.279886 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:57.280161 systemd[1]: session-7.scope: Consumed 5.991s CPU time, 155.8M memory peak, 0B memory swap peak. Sep 13 00:06:57.281447 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:57.282735 systemd-logind[1427]: Removed session 7. Sep 13 00:06:58.160162 kubelet[2464]: E0913 00:06:58.160042 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:00.029801 kubelet[2464]: E0913 00:07:00.029737 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:00.747332 kubelet[2464]: I0913 00:07:00.747283 2464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:07:00.747745 containerd[1446]: time="2025-09-13T00:07:00.747696370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:07:00.748029 kubelet[2464]: I0913 00:07:00.747913 2464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:07:00.757133 kubelet[2464]: E0913 00:07:00.757110 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:01.717360 systemd[1]: Created slice kubepods-besteffort-pod8382768f_93c4_48a6_bbc4_8260b35c5f29.slice - libcontainer container kubepods-besteffort-pod8382768f_93c4_48a6_bbc4_8260b35c5f29.slice. Sep 13 00:07:01.731545 systemd[1]: Created slice kubepods-burstable-podf14d657e_3fd1_43fe_89d0_7b799f892ab2.slice - libcontainer container kubepods-burstable-podf14d657e_3fd1_43fe_89d0_7b799f892ab2.slice. Sep 13 00:07:01.752356 kubelet[2464]: I0913 00:07:01.752317 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8382768f-93c4-48a6-bbc4-8260b35c5f29-kube-proxy\") pod \"kube-proxy-w8w4g\" (UID: \"8382768f-93c4-48a6-bbc4-8260b35c5f29\") " pod="kube-system/kube-proxy-w8w4g" Sep 13 00:07:01.752356 kubelet[2464]: I0913 00:07:01.752358 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8382768f-93c4-48a6-bbc4-8260b35c5f29-xtables-lock\") pod \"kube-proxy-w8w4g\" (UID: \"8382768f-93c4-48a6-bbc4-8260b35c5f29\") " pod="kube-system/kube-proxy-w8w4g" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752382 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-lib-modules\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752398 2464 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hubble-tls\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752414 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cni-path\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752428 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f14d657e-3fd1-43fe-89d0-7b799f892ab2-clustermesh-secrets\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752445 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-kernel\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.752965 kubelet[2464]: I0913 00:07:01.752462 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8382768f-93c4-48a6-bbc4-8260b35c5f29-lib-modules\") pod \"kube-proxy-w8w4g\" (UID: \"8382768f-93c4-48a6-bbc4-8260b35c5f29\") " pod="kube-system/kube-proxy-w8w4g" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752476 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-run\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752490 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-bpf-maps\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752503 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-cgroup\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752517 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkqf\" (UniqueName: \"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-kube-api-access-jkkqf\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752534 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-etc-cni-netd\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753127 kubelet[2464]: I0913 00:07:01.752547 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-config-path\") pod \"cilium-vphdc\" (UID: 
\"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753295 kubelet[2464]: I0913 00:07:01.752563 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-xtables-lock\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753295 kubelet[2464]: I0913 00:07:01.752598 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-net\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.753295 kubelet[2464]: I0913 00:07:01.752615 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2kr\" (UniqueName: \"kubernetes.io/projected/8382768f-93c4-48a6-bbc4-8260b35c5f29-kube-api-access-mp2kr\") pod \"kube-proxy-w8w4g\" (UID: \"8382768f-93c4-48a6-bbc4-8260b35c5f29\") " pod="kube-system/kube-proxy-w8w4g" Sep 13 00:07:01.753295 kubelet[2464]: I0913 00:07:01.752631 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hostproc\") pod \"cilium-vphdc\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " pod="kube-system/cilium-vphdc" Sep 13 00:07:01.758723 kubelet[2464]: E0913 00:07:01.758691 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:01.795080 systemd[1]: Created slice kubepods-besteffort-podb31b7e42_27ef_4298_81a2_b371a8197a65.slice - libcontainer container 
kubepods-besteffort-podb31b7e42_27ef_4298_81a2_b371a8197a65.slice. Sep 13 00:07:01.853834 kubelet[2464]: I0913 00:07:01.853677 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488c4\" (UniqueName: \"kubernetes.io/projected/b31b7e42-27ef-4298-81a2-b371a8197a65-kube-api-access-488c4\") pod \"cilium-operator-5d85765b45-bldnt\" (UID: \"b31b7e42-27ef-4298-81a2-b371a8197a65\") " pod="kube-system/cilium-operator-5d85765b45-bldnt" Sep 13 00:07:01.854225 kubelet[2464]: I0913 00:07:01.854178 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b31b7e42-27ef-4298-81a2-b371a8197a65-cilium-config-path\") pod \"cilium-operator-5d85765b45-bldnt\" (UID: \"b31b7e42-27ef-4298-81a2-b371a8197a65\") " pod="kube-system/cilium-operator-5d85765b45-bldnt" Sep 13 00:07:02.028061 kubelet[2464]: E0913 00:07:02.027857 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.028593 containerd[1446]: time="2025-09-13T00:07:02.028546843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8w4g,Uid:8382768f-93c4-48a6-bbc4-8260b35c5f29,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.035292 kubelet[2464]: E0913 00:07:02.034858 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.035629 containerd[1446]: time="2025-09-13T00:07:02.035597627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vphdc,Uid:f14d657e-3fd1-43fe-89d0-7b799f892ab2,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.060094 containerd[1446]: time="2025-09-13T00:07:02.059895472Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.060094 containerd[1446]: time="2025-09-13T00:07:02.059965218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.060094 containerd[1446]: time="2025-09-13T00:07:02.060003929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.060260 containerd[1446]: time="2025-09-13T00:07:02.060102189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.068819 containerd[1446]: time="2025-09-13T00:07:02.068731878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.068819 containerd[1446]: time="2025-09-13T00:07:02.068790745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.068819 containerd[1446]: time="2025-09-13T00:07:02.068802303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.069124 containerd[1446]: time="2025-09-13T00:07:02.068866329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.080201 systemd[1]: Started cri-containerd-284110a2f06c4145b460e15c7a62d0148d93b623d22cbdb0b77bd3c2cb1b52ac.scope - libcontainer container 284110a2f06c4145b460e15c7a62d0148d93b623d22cbdb0b77bd3c2cb1b52ac. Sep 13 00:07:02.084607 systemd[1]: Started cri-containerd-31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5.scope - libcontainer container 31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5. 
Sep 13 00:07:02.098438 kubelet[2464]: E0913 00:07:02.098401 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.099348 containerd[1446]: time="2025-09-13T00:07:02.098913675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bldnt,Uid:b31b7e42-27ef-4298-81a2-b371a8197a65,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.110036 containerd[1446]: time="2025-09-13T00:07:02.109867591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8w4g,Uid:8382768f-93c4-48a6-bbc4-8260b35c5f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"284110a2f06c4145b460e15c7a62d0148d93b623d22cbdb0b77bd3c2cb1b52ac\"" Sep 13 00:07:02.110612 kubelet[2464]: E0913 00:07:02.110590 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.111039 containerd[1446]: time="2025-09-13T00:07:02.111004750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vphdc,Uid:f14d657e-3fd1-43fe-89d0-7b799f892ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\"" Sep 13 00:07:02.112907 kubelet[2464]: E0913 00:07:02.112448 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.115348 containerd[1446]: time="2025-09-13T00:07:02.115115598Z" level=info msg="CreateContainer within sandbox \"284110a2f06c4145b460e15c7a62d0148d93b623d22cbdb0b77bd3c2cb1b52ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:07:02.115763 containerd[1446]: time="2025-09-13T00:07:02.115688956Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:07:02.131834 containerd[1446]: time="2025-09-13T00:07:02.131705238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.131834 containerd[1446]: time="2025-09-13T00:07:02.131784381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.132137 containerd[1446]: time="2025-09-13T00:07:02.131808816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.132137 containerd[1446]: time="2025-09-13T00:07:02.132085357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.139420 containerd[1446]: time="2025-09-13T00:07:02.137667893Z" level=info msg="CreateContainer within sandbox \"284110a2f06c4145b460e15c7a62d0148d93b623d22cbdb0b77bd3c2cb1b52ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a51f390e5fb222722d313fc0e930b6001e9d66195b0e2207b97edb2249321c7\"" Sep 13 00:07:02.140054 containerd[1446]: time="2025-09-13T00:07:02.139996719Z" level=info msg="StartContainer for \"1a51f390e5fb222722d313fc0e930b6001e9d66195b0e2207b97edb2249321c7\"" Sep 13 00:07:02.152195 systemd[1]: Started cri-containerd-ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90.scope - libcontainer container ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90. Sep 13 00:07:02.166183 systemd[1]: Started cri-containerd-1a51f390e5fb222722d313fc0e930b6001e9d66195b0e2207b97edb2249321c7.scope - libcontainer container 1a51f390e5fb222722d313fc0e930b6001e9d66195b0e2207b97edb2249321c7. 
Sep 13 00:07:02.188916 containerd[1446]: time="2025-09-13T00:07:02.188856834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bldnt,Uid:b31b7e42-27ef-4298-81a2-b371a8197a65,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\"" Sep 13 00:07:02.190008 kubelet[2464]: E0913 00:07:02.189982 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.198757 containerd[1446]: time="2025-09-13T00:07:02.198721261Z" level=info msg="StartContainer for \"1a51f390e5fb222722d313fc0e930b6001e9d66195b0e2207b97edb2249321c7\" returns successfully" Sep 13 00:07:02.766177 kubelet[2464]: E0913 00:07:02.766133 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:02.779891 kubelet[2464]: I0913 00:07:02.779782 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8w4g" podStartSLOduration=1.779765474 podStartE2EDuration="1.779765474s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:02.77940527 +0000 UTC m=+8.140184885" watchObservedRunningTime="2025-09-13 00:07:02.779765474 +0000 UTC m=+8.140545089" Sep 13 00:07:02.878092 kubelet[2464]: E0913 00:07:02.877624 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:03.768050 kubelet[2464]: E0913 00:07:03.767889 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:08.169699 kubelet[2464]: E0913 00:07:08.169107 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:12.101611 update_engine[1429]: I20250913 00:07:12.101047 1429 update_attempter.cc:509] Updating boot flags... Sep 13 00:07:12.136973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2849) Sep 13 00:07:12.176068 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2850) Sep 13 00:07:17.462237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417686262.mount: Deactivated successfully. Sep 13 00:07:18.840909 containerd[1446]: time="2025-09-13T00:07:18.840848236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:18.841851 containerd[1446]: time="2025-09-13T00:07:18.841671131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 13 00:07:18.846802 containerd[1446]: time="2025-09-13T00:07:18.846399447Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:18.848472 containerd[1446]: time="2025-09-13T00:07:18.848343479Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", 
size \"157636062\" in 16.732567061s" Sep 13 00:07:18.848472 containerd[1446]: time="2025-09-13T00:07:18.848382914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:07:18.851704 containerd[1446]: time="2025-09-13T00:07:18.851657416Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:07:18.854337 containerd[1446]: time="2025-09-13T00:07:18.854301159Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:07:18.877526 containerd[1446]: time="2025-09-13T00:07:18.877434486Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\"" Sep 13 00:07:18.878202 containerd[1446]: time="2025-09-13T00:07:18.878160113Z" level=info msg="StartContainer for \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\"" Sep 13 00:07:18.908247 systemd[1]: Started cri-containerd-9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a.scope - libcontainer container 9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a. Sep 13 00:07:18.932790 containerd[1446]: time="2025-09-13T00:07:18.932718790Z" level=info msg="StartContainer for \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\" returns successfully" Sep 13 00:07:18.944805 systemd[1]: cri-containerd-9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a.scope: Deactivated successfully. 
Sep 13 00:07:19.117055 containerd[1446]: time="2025-09-13T00:07:19.116913661Z" level=info msg="shim disconnected" id=9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a namespace=k8s.io Sep 13 00:07:19.117055 containerd[1446]: time="2025-09-13T00:07:19.116965535Z" level=warning msg="cleaning up after shim disconnected" id=9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a namespace=k8s.io Sep 13 00:07:19.117055 containerd[1446]: time="2025-09-13T00:07:19.116974214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:19.804166 kubelet[2464]: E0913 00:07:19.804128 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:19.806429 containerd[1446]: time="2025-09-13T00:07:19.806386247Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:07:19.875309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:19.877718 containerd[1446]: time="2025-09-13T00:07:19.877675672Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\"" Sep 13 00:07:19.879769 containerd[1446]: time="2025-09-13T00:07:19.878764497Z" level=info msg="StartContainer for \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\"" Sep 13 00:07:19.915218 systemd[1]: Started cri-containerd-b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567.scope - libcontainer container b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567. Sep 13 00:07:19.949032 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:07:19.949245 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:07:19.949307 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:07:19.955259 containerd[1446]: time="2025-09-13T00:07:19.955212244Z" level=info msg="StartContainer for \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\" returns successfully" Sep 13 00:07:19.957387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:07:19.957555 systemd[1]: cri-containerd-b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567.scope: Deactivated successfully. Sep 13 00:07:19.974568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567-rootfs.mount: Deactivated successfully. Sep 13 00:07:19.976724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 00:07:19.988728 containerd[1446]: time="2025-09-13T00:07:19.988649070Z" level=info msg="shim disconnected" id=b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567 namespace=k8s.io Sep 13 00:07:19.988728 containerd[1446]: time="2025-09-13T00:07:19.988725100Z" level=warning msg="cleaning up after shim disconnected" id=b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567 namespace=k8s.io Sep 13 00:07:19.988728 containerd[1446]: time="2025-09-13T00:07:19.988734379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:20.807275 kubelet[2464]: E0913 00:07:20.807243 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:20.809913 containerd[1446]: time="2025-09-13T00:07:20.809876050Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:07:20.825860 containerd[1446]: time="2025-09-13T00:07:20.825803582Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\"" Sep 13 00:07:20.826418 containerd[1446]: time="2025-09-13T00:07:20.826389432Z" level=info msg="StartContainer for \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\"" Sep 13 00:07:20.853199 systemd[1]: Started cri-containerd-90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312.scope - libcontainer container 90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312. Sep 13 00:07:20.875333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404900823.mount: Deactivated successfully. 
Sep 13 00:07:20.881409 systemd[1]: cri-containerd-90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312.scope: Deactivated successfully. Sep 13 00:07:20.882329 containerd[1446]: time="2025-09-13T00:07:20.882292335Z" level=info msg="StartContainer for \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\" returns successfully" Sep 13 00:07:20.899997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312-rootfs.mount: Deactivated successfully. Sep 13 00:07:20.920090 containerd[1446]: time="2025-09-13T00:07:20.920032775Z" level=info msg="shim disconnected" id=90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312 namespace=k8s.io Sep 13 00:07:20.920090 containerd[1446]: time="2025-09-13T00:07:20.920085008Z" level=warning msg="cleaning up after shim disconnected" id=90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312 namespace=k8s.io Sep 13 00:07:20.920090 containerd[1446]: time="2025-09-13T00:07:20.920096527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:21.812258 kubelet[2464]: E0913 00:07:21.811754 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:21.816003 containerd[1446]: time="2025-09-13T00:07:21.815948267Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:07:21.858148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510172459.mount: Deactivated successfully. 
Sep 13 00:07:21.866084 containerd[1446]: time="2025-09-13T00:07:21.865944145Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\"" Sep 13 00:07:21.867829 containerd[1446]: time="2025-09-13T00:07:21.866946069Z" level=info msg="StartContainer for \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\"" Sep 13 00:07:21.892768 systemd[1]: run-containerd-runc-k8s.io-b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e-runc.SSZoN5.mount: Deactivated successfully. Sep 13 00:07:21.907395 systemd[1]: Started cri-containerd-b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e.scope - libcontainer container b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e. Sep 13 00:07:21.933518 systemd[1]: cri-containerd-b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e.scope: Deactivated successfully. Sep 13 00:07:21.937062 containerd[1446]: time="2025-09-13T00:07:21.936999580Z" level=info msg="StartContainer for \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\" returns successfully" Sep 13 00:07:21.954745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:21.976595 containerd[1446]: time="2025-09-13T00:07:21.976388769Z" level=info msg="shim disconnected" id=b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e namespace=k8s.io Sep 13 00:07:21.976595 containerd[1446]: time="2025-09-13T00:07:21.976438243Z" level=warning msg="cleaning up after shim disconnected" id=b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e namespace=k8s.io Sep 13 00:07:21.976595 containerd[1446]: time="2025-09-13T00:07:21.976446482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:21.990601 containerd[1446]: time="2025-09-13T00:07:21.990552805Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:07:22.117739 containerd[1446]: time="2025-09-13T00:07:22.117616763Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:22.118809 containerd[1446]: time="2025-09-13T00:07:22.118770113Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 13 00:07:22.119722 containerd[1446]: time="2025-09-13T00:07:22.119695209Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:22.121239 containerd[1446]: time="2025-09-13T00:07:22.121208479Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.26949687s" Sep 13 00:07:22.121297 containerd[1446]: time="2025-09-13T00:07:22.121245315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:07:22.133625 containerd[1446]: time="2025-09-13T00:07:22.133586488Z" level=info msg="CreateContainer within sandbox \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:07:22.142582 containerd[1446]: time="2025-09-13T00:07:22.142533282Z" level=info msg="CreateContainer within sandbox \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\"" Sep 13 00:07:22.143261 containerd[1446]: time="2025-09-13T00:07:22.143228524Z" level=info msg="StartContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\"" Sep 13 00:07:22.172212 systemd[1]: Started cri-containerd-53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772.scope - libcontainer container 53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772. 
Sep 13 00:07:22.194541 containerd[1446]: time="2025-09-13T00:07:22.194426328Z" level=info msg="StartContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" returns successfully" Sep 13 00:07:22.817545 kubelet[2464]: E0913 00:07:22.817507 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:22.827891 kubelet[2464]: E0913 00:07:22.827849 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:22.829908 containerd[1446]: time="2025-09-13T00:07:22.829873294Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:07:22.844760 kubelet[2464]: I0913 00:07:22.844467 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bldnt" podStartSLOduration=1.909676575 podStartE2EDuration="21.844452055s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:02.190724957 +0000 UTC m=+7.551504572" lastFinishedPulling="2025-09-13 00:07:22.125500437 +0000 UTC m=+27.486280052" observedRunningTime="2025-09-13 00:07:22.840484021 +0000 UTC m=+28.201263636" watchObservedRunningTime="2025-09-13 00:07:22.844452055 +0000 UTC m=+28.205231670" Sep 13 00:07:22.857737 containerd[1446]: time="2025-09-13T00:07:22.857462232Z" level=info msg="CreateContainer within sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\"" Sep 13 00:07:22.859922 containerd[1446]: time="2025-09-13T00:07:22.858082323Z" level=info 
msg="StartContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\"" Sep 13 00:07:22.895155 systemd[1]: Started cri-containerd-0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b.scope - libcontainer container 0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b. Sep 13 00:07:22.922159 containerd[1446]: time="2025-09-13T00:07:22.922099886Z" level=info msg="StartContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" returns successfully" Sep 13 00:07:23.059306 kubelet[2464]: I0913 00:07:23.059272 2464 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:07:23.102121 kubelet[2464]: I0913 00:07:23.100754 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fd8aa40-1310-4d3d-84a9-455980df74f6-config-volume\") pod \"coredns-7c65d6cfc9-j5ltk\" (UID: \"7fd8aa40-1310-4d3d-84a9-455980df74f6\") " pod="kube-system/coredns-7c65d6cfc9-j5ltk" Sep 13 00:07:23.102121 kubelet[2464]: I0913 00:07:23.100828 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aafa26c3-73b2-4bf8-9326-dcd39a790aba-config-volume\") pod \"coredns-7c65d6cfc9-d27cp\" (UID: \"aafa26c3-73b2-4bf8-9326-dcd39a790aba\") " pod="kube-system/coredns-7c65d6cfc9-d27cp" Sep 13 00:07:23.102121 kubelet[2464]: I0913 00:07:23.100863 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzsb\" (UniqueName: \"kubernetes.io/projected/7fd8aa40-1310-4d3d-84a9-455980df74f6-kube-api-access-dmzsb\") pod \"coredns-7c65d6cfc9-j5ltk\" (UID: \"7fd8aa40-1310-4d3d-84a9-455980df74f6\") " pod="kube-system/coredns-7c65d6cfc9-j5ltk" Sep 13 00:07:23.102121 kubelet[2464]: I0913 00:07:23.100885 2464 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hqf\" (UniqueName: \"kubernetes.io/projected/aafa26c3-73b2-4bf8-9326-dcd39a790aba-kube-api-access-j9hqf\") pod \"coredns-7c65d6cfc9-d27cp\" (UID: \"aafa26c3-73b2-4bf8-9326-dcd39a790aba\") " pod="kube-system/coredns-7c65d6cfc9-d27cp" Sep 13 00:07:23.101324 systemd[1]: Created slice kubepods-burstable-podaafa26c3_73b2_4bf8_9326_dcd39a790aba.slice - libcontainer container kubepods-burstable-podaafa26c3_73b2_4bf8_9326_dcd39a790aba.slice. Sep 13 00:07:23.110562 systemd[1]: Created slice kubepods-burstable-pod7fd8aa40_1310_4d3d_84a9_455980df74f6.slice - libcontainer container kubepods-burstable-pod7fd8aa40_1310_4d3d_84a9_455980df74f6.slice. Sep 13 00:07:23.409180 kubelet[2464]: E0913 00:07:23.408957 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.409938 containerd[1446]: time="2025-09-13T00:07:23.409888887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d27cp,Uid:aafa26c3-73b2-4bf8-9326-dcd39a790aba,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:23.413522 kubelet[2464]: E0913 00:07:23.413071 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.413597 containerd[1446]: time="2025-09-13T00:07:23.413410984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j5ltk,Uid:7fd8aa40-1310-4d3d-84a9-455980df74f6,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:23.826620 kubelet[2464]: E0913 00:07:23.826238 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.826620 kubelet[2464]: E0913 00:07:23.826370 2464 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.843579 kubelet[2464]: I0913 00:07:23.843207 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vphdc" podStartSLOduration=6.106027324 podStartE2EDuration="22.84318962s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:02.114116729 +0000 UTC m=+7.474896344" lastFinishedPulling="2025-09-13 00:07:18.851279065 +0000 UTC m=+24.212058640" observedRunningTime="2025-09-13 00:07:23.842475857 +0000 UTC m=+29.203255472" watchObservedRunningTime="2025-09-13 00:07:23.84318962 +0000 UTC m=+29.203969235" Sep 13 00:07:24.829361 kubelet[2464]: E0913 00:07:24.829328 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:25.774759 systemd-networkd[1385]: cilium_host: Link UP Sep 13 00:07:25.777221 systemd-networkd[1385]: cilium_net: Link UP Sep 13 00:07:25.777423 systemd-networkd[1385]: cilium_net: Gained carrier Sep 13 00:07:25.777556 systemd-networkd[1385]: cilium_host: Gained carrier Sep 13 00:07:25.827193 systemd-networkd[1385]: cilium_host: Gained IPv6LL Sep 13 00:07:25.831661 kubelet[2464]: E0913 00:07:25.831190 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:25.873779 systemd-networkd[1385]: cilium_vxlan: Link UP Sep 13 00:07:25.873788 systemd-networkd[1385]: cilium_vxlan: Gained carrier Sep 13 00:07:26.126062 kernel: NET: Registered PF_ALG protocol family Sep 13 00:07:26.194155 systemd-networkd[1385]: cilium_net: Gained IPv6LL Sep 13 00:07:26.698148 systemd-networkd[1385]: lxc_health: Link UP Sep 13 00:07:26.706140 systemd-networkd[1385]: 
lxc_health: Gained carrier Sep 13 00:07:26.962382 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Sep 13 00:07:26.977685 systemd-networkd[1385]: lxccd321ca28e67: Link UP Sep 13 00:07:26.992080 kernel: eth0: renamed from tmpc74d5 Sep 13 00:07:27.005167 systemd-networkd[1385]: lxccd321ca28e67: Gained carrier Sep 13 00:07:27.005371 systemd-networkd[1385]: lxce711781f7fc0: Link UP Sep 13 00:07:27.014080 kernel: eth0: renamed from tmpca2cd Sep 13 00:07:27.026992 systemd-networkd[1385]: lxce711781f7fc0: Gained carrier Sep 13 00:07:28.041232 kubelet[2464]: E0913 00:07:28.041186 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:28.242163 systemd-networkd[1385]: lxccd321ca28e67: Gained IPv6LL Sep 13 00:07:28.626239 systemd-networkd[1385]: lxc_health: Gained IPv6LL Sep 13 00:07:28.837027 kubelet[2464]: E0913 00:07:28.836973 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:28.991009 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:39110.service - OpenSSH per-connection server daemon (10.0.0.1:39110). Sep 13 00:07:29.010372 systemd-networkd[1385]: lxce711781f7fc0: Gained IPv6LL Sep 13 00:07:29.030587 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 39110 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:29.032086 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:29.036992 systemd-logind[1427]: New session 8 of user core. Sep 13 00:07:29.048220 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:07:29.173310 sshd[3714]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:29.176927 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 00:07:29.177440 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:39110.service: Deactivated successfully. Sep 13 00:07:29.180679 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:07:29.181927 systemd-logind[1427]: Removed session 8. Sep 13 00:07:29.839863 kubelet[2464]: E0913 00:07:29.839604 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.726803 containerd[1446]: time="2025-09-13T00:07:30.726631580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:30.726803 containerd[1446]: time="2025-09-13T00:07:30.726693535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:30.726803 containerd[1446]: time="2025-09-13T00:07:30.726716693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.727272 containerd[1446]: time="2025-09-13T00:07:30.726808045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.727743 containerd[1446]: time="2025-09-13T00:07:30.727649692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:30.727743 containerd[1446]: time="2025-09-13T00:07:30.727707247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:30.727743 containerd[1446]: time="2025-09-13T00:07:30.727728805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.727943 containerd[1446]: time="2025-09-13T00:07:30.727876112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.746522 systemd[1]: run-containerd-runc-k8s.io-c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a-runc.UyCr2D.mount: Deactivated successfully. Sep 13 00:07:30.757209 systemd[1]: Started cri-containerd-c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a.scope - libcontainer container c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a. Sep 13 00:07:30.758325 systemd[1]: Started cri-containerd-ca2cd8394ba06c5c3dd1e7651c3e6ccb483ca912620e3b22898fe0395953f7eb.scope - libcontainer container ca2cd8394ba06c5c3dd1e7651c3e6ccb483ca912620e3b22898fe0395953f7eb. Sep 13 00:07:30.769968 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:07:30.772321 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:07:30.794297 containerd[1446]: time="2025-09-13T00:07:30.794174011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d27cp,Uid:aafa26c3-73b2-4bf8-9326-dcd39a790aba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca2cd8394ba06c5c3dd1e7651c3e6ccb483ca912620e3b22898fe0395953f7eb\"" Sep 13 00:07:30.794543 containerd[1446]: time="2025-09-13T00:07:30.794482584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j5ltk,Uid:7fd8aa40-1310-4d3d-84a9-455980df74f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a\"" Sep 13 00:07:30.795345 kubelet[2464]: E0913 00:07:30.795317 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.795493 kubelet[2464]: E0913 00:07:30.795317 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.798589 containerd[1446]: time="2025-09-13T00:07:30.798538310Z" level=info msg="CreateContainer within sandbox \"c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:30.798951 containerd[1446]: time="2025-09-13T00:07:30.798926676Z" level=info msg="CreateContainer within sandbox \"ca2cd8394ba06c5c3dd1e7651c3e6ccb483ca912620e3b22898fe0395953f7eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:30.810986 containerd[1446]: time="2025-09-13T00:07:30.810942029Z" level=info msg="CreateContainer within sandbox \"ca2cd8394ba06c5c3dd1e7651c3e6ccb483ca912620e3b22898fe0395953f7eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9f1508db7938905d786f34afa3a0fb008d8c4a5a486739c461e98e80ce8d7e5\"" Sep 13 00:07:30.812244 containerd[1446]: time="2025-09-13T00:07:30.812218957Z" level=info msg="StartContainer for \"a9f1508db7938905d786f34afa3a0fb008d8c4a5a486739c461e98e80ce8d7e5\"" Sep 13 00:07:30.818730 containerd[1446]: time="2025-09-13T00:07:30.818687593Z" level=info msg="CreateContainer within sandbox \"c74d514f3a03596ea5e7c2e43d04bcef9862d3acde9d2326210d8ad153c0675a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abd1c38b25aafdb36d8d347e6f10d4490bc689f10daa827472983000ff2fc985\"" Sep 13 00:07:30.820584 containerd[1446]: time="2025-09-13T00:07:30.820556790Z" level=info msg="StartContainer for \"abd1c38b25aafdb36d8d347e6f10d4490bc689f10daa827472983000ff2fc985\"" Sep 13 00:07:30.837314 systemd[1]: Started cri-containerd-a9f1508db7938905d786f34afa3a0fb008d8c4a5a486739c461e98e80ce8d7e5.scope - libcontainer container 
a9f1508db7938905d786f34afa3a0fb008d8c4a5a486739c461e98e80ce8d7e5. Sep 13 00:07:30.842070 systemd[1]: Started cri-containerd-abd1c38b25aafdb36d8d347e6f10d4490bc689f10daa827472983000ff2fc985.scope - libcontainer container abd1c38b25aafdb36d8d347e6f10d4490bc689f10daa827472983000ff2fc985. Sep 13 00:07:30.869411 containerd[1446]: time="2025-09-13T00:07:30.869359455Z" level=info msg="StartContainer for \"a9f1508db7938905d786f34afa3a0fb008d8c4a5a486739c461e98e80ce8d7e5\" returns successfully" Sep 13 00:07:30.869551 containerd[1446]: time="2025-09-13T00:07:30.869453286Z" level=info msg="StartContainer for \"abd1c38b25aafdb36d8d347e6f10d4490bc689f10daa827472983000ff2fc985\" returns successfully" Sep 13 00:07:31.850463 kubelet[2464]: E0913 00:07:31.850218 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:31.853951 kubelet[2464]: E0913 00:07:31.853903 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:31.873593 kubelet[2464]: I0913 00:07:31.873494 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d27cp" podStartSLOduration=30.873477072 podStartE2EDuration="30.873477072s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:31.872871203 +0000 UTC m=+37.233650818" watchObservedRunningTime="2025-09-13 00:07:31.873477072 +0000 UTC m=+37.234256687" Sep 13 00:07:31.873989 kubelet[2464]: I0913 00:07:31.873799 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j5ltk" podStartSLOduration=30.873791045 podStartE2EDuration="30.873791045s" 
podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:31.860381818 +0000 UTC m=+37.221161433" watchObservedRunningTime="2025-09-13 00:07:31.873791045 +0000 UTC m=+37.234570660" Sep 13 00:07:32.855316 kubelet[2464]: E0913 00:07:32.855267 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:32.855316 kubelet[2464]: E0913 00:07:32.855315 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:33.857906 kubelet[2464]: E0913 00:07:33.857758 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:33.857906 kubelet[2464]: E0913 00:07:33.857798 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:34.187986 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). Sep 13 00:07:34.239327 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:34.240907 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:34.244816 systemd-logind[1427]: New session 9 of user core. Sep 13 00:07:34.254216 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 13 00:07:34.373188 sshd[3906]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.376620 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:60544.service: Deactivated successfully. Sep 13 00:07:34.378371 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:07:34.378959 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:07:34.379876 systemd-logind[1427]: Removed session 9. Sep 13 00:07:39.385546 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548). Sep 13 00:07:39.419209 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:39.420553 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:39.426228 systemd-logind[1427]: New session 10 of user core. Sep 13 00:07:39.430199 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:07:39.549878 sshd[3923]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:39.553263 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:60548.service: Deactivated successfully. Sep 13 00:07:39.554868 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:07:39.558510 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:07:39.559462 systemd-logind[1427]: Removed session 10. Sep 13 00:07:44.577745 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988). Sep 13 00:07:44.628785 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:44.630165 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:44.634952 systemd-logind[1427]: New session 11 of user core. Sep 13 00:07:44.648224 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:07:44.782940 sshd[3939]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:44.792451 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:40988.service: Deactivated successfully. Sep 13 00:07:44.794668 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:07:44.796503 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:07:44.805362 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:40992.service - OpenSSH per-connection server daemon (10.0.0.1:40992). Sep 13 00:07:44.806352 systemd-logind[1427]: Removed session 11. Sep 13 00:07:44.846826 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 40992 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:44.848502 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:44.855478 systemd-logind[1427]: New session 12 of user core. Sep 13 00:07:44.865221 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:07:45.035037 sshd[3955]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:45.045445 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:40992.service: Deactivated successfully. Sep 13 00:07:45.050003 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:07:45.055295 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:07:45.062600 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:40994.service - OpenSSH per-connection server daemon (10.0.0.1:40994). Sep 13 00:07:45.066309 systemd-logind[1427]: Removed session 12. Sep 13 00:07:45.095739 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 40994 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:45.097244 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:45.102113 systemd-logind[1427]: New session 13 of user core. 
Sep 13 00:07:45.108226 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:07:45.221527 sshd[3968]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:45.226777 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:40994.service: Deactivated successfully. Sep 13 00:07:45.231004 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:07:45.233695 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:07:45.235125 systemd-logind[1427]: Removed session 13. Sep 13 00:07:50.237377 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:56390.service - OpenSSH per-connection server daemon (10.0.0.1:56390). Sep 13 00:07:50.279596 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 56390 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:50.281333 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:50.285936 systemd-logind[1427]: New session 14 of user core. Sep 13 00:07:50.300194 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:07:50.420573 sshd[3983]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:50.432304 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:56390.service: Deactivated successfully. Sep 13 00:07:50.433680 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:07:50.437212 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:07:50.450323 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:56406.service - OpenSSH per-connection server daemon (10.0.0.1:56406). Sep 13 00:07:50.453446 systemd-logind[1427]: Removed session 14. 
Sep 13 00:07:50.491669 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 56406 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:50.493586 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:50.498101 systemd-logind[1427]: New session 15 of user core. Sep 13 00:07:50.508216 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:07:50.745088 sshd[3997]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:50.751598 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:56406.service: Deactivated successfully. Sep 13 00:07:50.757598 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:07:50.760482 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:07:50.768299 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:56414.service - OpenSSH per-connection server daemon (10.0.0.1:56414). Sep 13 00:07:50.769235 systemd-logind[1427]: Removed session 15. Sep 13 00:07:50.813631 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 56414 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:50.817041 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:50.821683 systemd-logind[1427]: New session 16 of user core. Sep 13 00:07:50.833263 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:07:52.112087 sshd[4011]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.120546 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:56414.service: Deactivated successfully. Sep 13 00:07:52.122114 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:52.123513 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:07:52.131064 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:56426.service - OpenSSH per-connection server daemon (10.0.0.1:56426). 
Sep 13 00:07:52.132479 systemd-logind[1427]: Removed session 16. Sep 13 00:07:52.173784 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 56426 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:52.175178 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:52.179862 systemd-logind[1427]: New session 17 of user core. Sep 13 00:07:52.189185 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:07:52.410853 sshd[4031]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.418511 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:56426.service: Deactivated successfully. Sep 13 00:07:52.421469 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:52.422910 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:07:52.430294 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:56440.service - OpenSSH per-connection server daemon (10.0.0.1:56440). Sep 13 00:07:52.431494 systemd-logind[1427]: Removed session 17. Sep 13 00:07:52.470287 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 56440 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:52.471770 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:52.476747 systemd-logind[1427]: New session 18 of user core. Sep 13 00:07:52.487233 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:07:52.599169 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.602075 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:56440.service: Deactivated successfully. Sep 13 00:07:52.604050 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:52.606061 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:52.607269 systemd-logind[1427]: Removed session 18. 
Sep 13 00:07:57.610827 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:56446.service - OpenSSH per-connection server daemon (10.0.0.1:56446). Sep 13 00:07:57.674180 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 56446 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:07:57.675881 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:57.680293 systemd-logind[1427]: New session 19 of user core. Sep 13 00:07:57.694218 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:07:57.821680 sshd[4064]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:57.825119 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:56446.service: Deactivated successfully. Sep 13 00:07:57.827478 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:57.829856 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:57.830661 systemd-logind[1427]: Removed session 19. Sep 13 00:08:02.831379 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:55866.service - OpenSSH per-connection server daemon (10.0.0.1:55866). Sep 13 00:08:02.865668 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 55866 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:08:02.866818 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:02.870384 systemd-logind[1427]: New session 20 of user core. Sep 13 00:08:02.879183 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:08:02.988031 sshd[4081]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:02.991183 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:55866.service: Deactivated successfully. Sep 13 00:08:02.992756 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:08:02.993371 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. 
Sep 13 00:08:02.994418 systemd-logind[1427]: Removed session 20. Sep 13 00:08:07.998532 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:55880.service - OpenSSH per-connection server daemon (10.0.0.1:55880). Sep 13 00:08:08.033377 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 55880 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:08:08.034624 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:08.038492 systemd-logind[1427]: New session 21 of user core. Sep 13 00:08:08.053276 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:08:08.163767 sshd[4096]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:08.177515 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:55880.service: Deactivated successfully. Sep 13 00:08:08.179108 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:08:08.180310 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:08:08.189316 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:55892.service - OpenSSH per-connection server daemon (10.0.0.1:55892). Sep 13 00:08:08.190687 systemd-logind[1427]: Removed session 21. Sep 13 00:08:08.222508 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 55892 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:08:08.223902 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:08.228236 systemd-logind[1427]: New session 22 of user core. Sep 13 00:08:08.238198 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 13 00:08:10.279809 containerd[1446]: time="2025-09-13T00:08:10.279595894Z" level=info msg="StopContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" with timeout 30 (s)" Sep 13 00:08:10.287615 containerd[1446]: time="2025-09-13T00:08:10.282479344Z" level=info msg="Stop container \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" with signal terminated" Sep 13 00:08:10.298259 systemd[1]: cri-containerd-53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772.scope: Deactivated successfully. Sep 13 00:08:10.306765 systemd[1]: run-containerd-runc-k8s.io-0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b-runc.WYMmes.mount: Deactivated successfully. Sep 13 00:08:10.321094 containerd[1446]: time="2025-09-13T00:08:10.320834204Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:10.326093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:10.329856 containerd[1446]: time="2025-09-13T00:08:10.329819184Z" level=info msg="StopContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" with timeout 2 (s)" Sep 13 00:08:10.330325 containerd[1446]: time="2025-09-13T00:08:10.330244534Z" level=info msg="Stop container \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" with signal terminated" Sep 13 00:08:10.336968 systemd-networkd[1385]: lxc_health: Link DOWN Sep 13 00:08:10.336976 systemd-networkd[1385]: lxc_health: Lost carrier Sep 13 00:08:10.342384 containerd[1446]: time="2025-09-13T00:08:10.342309198Z" level=info msg="shim disconnected" id=53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772 namespace=k8s.io Sep 13 00:08:10.342384 containerd[1446]: time="2025-09-13T00:08:10.342360877Z" level=warning msg="cleaning up after shim disconnected" id=53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772 namespace=k8s.io Sep 13 00:08:10.342384 containerd[1446]: time="2025-09-13T00:08:10.342369597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:10.357050 containerd[1446]: time="2025-09-13T00:08:10.355869386Z" level=info msg="StopContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" returns successfully" Sep 13 00:08:10.357852 containerd[1446]: time="2025-09-13T00:08:10.357707781Z" level=info msg="StopPodSandbox for \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\"" Sep 13 00:08:10.357852 containerd[1446]: time="2025-09-13T00:08:10.357761780Z" level=info msg="Container to stop \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.359376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90-shm.mount: Deactivated successfully. 
Sep 13 00:08:10.362708 systemd[1]: cri-containerd-0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b.scope: Deactivated successfully. Sep 13 00:08:10.363269 systemd[1]: cri-containerd-0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b.scope: Consumed 6.383s CPU time. Sep 13 00:08:10.376642 systemd[1]: cri-containerd-ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90.scope: Deactivated successfully. Sep 13 00:08:10.387595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b-rootfs.mount: Deactivated successfully. Sep 13 00:08:10.393986 containerd[1446]: time="2025-09-13T00:08:10.393924574Z" level=info msg="shim disconnected" id=0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b namespace=k8s.io Sep 13 00:08:10.394289 containerd[1446]: time="2025-09-13T00:08:10.394266206Z" level=warning msg="cleaning up after shim disconnected" id=0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b namespace=k8s.io Sep 13 00:08:10.394364 containerd[1446]: time="2025-09-13T00:08:10.394350564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:10.405971 containerd[1446]: time="2025-09-13T00:08:10.405892081Z" level=info msg="shim disconnected" id=ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90 namespace=k8s.io Sep 13 00:08:10.405971 containerd[1446]: time="2025-09-13T00:08:10.405952000Z" level=warning msg="cleaning up after shim disconnected" id=ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90 namespace=k8s.io Sep 13 00:08:10.405971 containerd[1446]: time="2025-09-13T00:08:10.405975359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:10.409224 containerd[1446]: time="2025-09-13T00:08:10.409103603Z" level=info msg="StopContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" returns successfully" Sep 13 00:08:10.410259 containerd[1446]: 
time="2025-09-13T00:08:10.410230015Z" level=info msg="StopPodSandbox for \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\"" Sep 13 00:08:10.410501 containerd[1446]: time="2025-09-13T00:08:10.410353932Z" level=info msg="Container to stop \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.410501 containerd[1446]: time="2025-09-13T00:08:10.410371732Z" level=info msg="Container to stop \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.410501 containerd[1446]: time="2025-09-13T00:08:10.410381212Z" level=info msg="Container to stop \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.410501 containerd[1446]: time="2025-09-13T00:08:10.410390811Z" level=info msg="Container to stop \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.410501 containerd[1446]: time="2025-09-13T00:08:10.410401411Z" level=info msg="Container to stop \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:10.417719 systemd[1]: cri-containerd-31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5.scope: Deactivated successfully. 
Sep 13 00:08:10.418874 containerd[1446]: time="2025-09-13T00:08:10.418816765Z" level=info msg="TearDown network for sandbox \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\" successfully" Sep 13 00:08:10.418874 containerd[1446]: time="2025-09-13T00:08:10.418859444Z" level=info msg="StopPodSandbox for \"ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90\" returns successfully" Sep 13 00:08:10.444568 containerd[1446]: time="2025-09-13T00:08:10.444480216Z" level=info msg="shim disconnected" id=31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5 namespace=k8s.io Sep 13 00:08:10.444942 containerd[1446]: time="2025-09-13T00:08:10.444547935Z" level=warning msg="cleaning up after shim disconnected" id=31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5 namespace=k8s.io Sep 13 00:08:10.444942 containerd[1446]: time="2025-09-13T00:08:10.444796849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:10.455629 containerd[1446]: time="2025-09-13T00:08:10.455543826Z" level=info msg="TearDown network for sandbox \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" successfully" Sep 13 00:08:10.455629 containerd[1446]: time="2025-09-13T00:08:10.455580305Z" level=info msg="StopPodSandbox for \"31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5\" returns successfully" Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593696 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-kernel\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593756 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkkqf\" (UniqueName: 
\"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-kube-api-access-jkkqf\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593786 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-run\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593803 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-config-path\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593817 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cni-path\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.593886 kubelet[2464]: I0913 00:08:10.593832 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b31b7e42-27ef-4298-81a2-b371a8197a65-cilium-config-path\") pod \"b31b7e42-27ef-4298-81a2-b371a8197a65\" (UID: \"b31b7e42-27ef-4298-81a2-b371a8197a65\") " Sep 13 00:08:10.594386 kubelet[2464]: I0913 00:08:10.593849 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f14d657e-3fd1-43fe-89d0-7b799f892ab2-clustermesh-secrets\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.594598 kubelet[2464]: I0913 
00:08:10.593866 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hostproc\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600032 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hubble-tls\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600068 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-bpf-maps\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600084 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-etc-cni-netd\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600103 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488c4\" (UniqueName: \"kubernetes.io/projected/b31b7e42-27ef-4298-81a2-b371a8197a65-kube-api-access-488c4\") pod \"b31b7e42-27ef-4298-81a2-b371a8197a65\" (UID: \"b31b7e42-27ef-4298-81a2-b371a8197a65\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600125 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-lib-modules\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: 
\"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601223 kubelet[2464]: I0913 00:08:10.600141 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-net\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601452 kubelet[2464]: I0913 00:08:10.600157 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-cgroup\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601452 kubelet[2464]: I0913 00:08:10.600172 2464 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-xtables-lock\") pod \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\" (UID: \"f14d657e-3fd1-43fe-89d0-7b799f892ab2\") " Sep 13 00:08:10.601452 kubelet[2464]: I0913 00:08:10.598872 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hostproc" (OuterVolumeSpecName: "hostproc") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:10.601452 kubelet[2464]: I0913 00:08:10.599161 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cni-path" (OuterVolumeSpecName: "cni-path") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.601452 kubelet[2464]: I0913 00:08:10.599205 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.601576 kubelet[2464]: I0913 00:08:10.599961 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.601576 kubelet[2464]: I0913 00:08:10.600221 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603085 kubelet[2464]: I0913 00:08:10.602351 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31b7e42-27ef-4298-81a2-b371a8197a65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b31b7e42-27ef-4298-81a2-b371a8197a65" (UID: "b31b7e42-27ef-4298-81a2-b371a8197a65"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:08:10.603256 kubelet[2464]: I0913 00:08:10.603220 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:08:10.603470 kubelet[2464]: I0913 00:08:10.603438 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-kube-api-access-jkkqf" (OuterVolumeSpecName: "kube-api-access-jkkqf") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "kube-api-access-jkkqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:10.603507 kubelet[2464]: I0913 00:08:10.603483 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603507 kubelet[2464]: I0913 00:08:10.603502 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603570 kubelet[2464]: I0913 00:08:10.603516 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603570 kubelet[2464]: I0913 00:08:10.603531 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603570 kubelet[2464]: I0913 00:08:10.603546 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:10.603963 kubelet[2464]: I0913 00:08:10.603926 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:10.604738 kubelet[2464]: I0913 00:08:10.604579 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14d657e-3fd1-43fe-89d0-7b799f892ab2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f14d657e-3fd1-43fe-89d0-7b799f892ab2" (UID: "f14d657e-3fd1-43fe-89d0-7b799f892ab2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:08:10.608273 kubelet[2464]: I0913 00:08:10.608235 2464 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31b7e42-27ef-4298-81a2-b371a8197a65-kube-api-access-488c4" (OuterVolumeSpecName: "kube-api-access-488c4") pod "b31b7e42-27ef-4298-81a2-b371a8197a65" (UID: "b31b7e42-27ef-4298-81a2-b371a8197a65"). InnerVolumeSpecName "kube-api-access-488c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700591 2464 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f14d657e-3fd1-43fe-89d0-7b799f892ab2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700629 2464 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700641 2464 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700649 2464 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700657 2464 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700665 2464 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488c4\" (UniqueName: \"kubernetes.io/projected/b31b7e42-27ef-4298-81a2-b371a8197a65-kube-api-access-488c4\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700673 2464 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.700760 kubelet[2464]: I0913 00:08:10.700680 2464 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700688 2464 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700695 2464 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700702 2464 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700710 2464 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkkqf\" (UniqueName: \"kubernetes.io/projected/f14d657e-3fd1-43fe-89d0-7b799f892ab2-kube-api-access-jkkqf\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700718 2464 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700726 2464 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700733 2464 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f14d657e-3fd1-43fe-89d0-7b799f892ab2-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.701084 kubelet[2464]: I0913 00:08:10.700740 2464 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b31b7e42-27ef-4298-81a2-b371a8197a65-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:08:10.744367 systemd[1]: Removed slice kubepods-burstable-podf14d657e_3fd1_43fe_89d0_7b799f892ab2.slice - libcontainer container kubepods-burstable-podf14d657e_3fd1_43fe_89d0_7b799f892ab2.slice.
Sep 13 00:08:10.744454 systemd[1]: kubepods-burstable-podf14d657e_3fd1_43fe_89d0_7b799f892ab2.slice: Consumed 6.458s CPU time.
Sep 13 00:08:10.745963 systemd[1]: Removed slice kubepods-besteffort-podb31b7e42_27ef_4298_81a2_b371a8197a65.slice - libcontainer container kubepods-besteffort-podb31b7e42_27ef_4298_81a2_b371a8197a65.slice.
Sep 13 00:08:10.937675 kubelet[2464]: I0913 00:08:10.937245 2464 scope.go:117] "RemoveContainer" containerID="53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772"
Sep 13 00:08:10.939914 containerd[1446]: time="2025-09-13T00:08:10.939874365Z" level=info msg="RemoveContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\""
Sep 13 00:08:10.949056 containerd[1446]: time="2025-09-13T00:08:10.948995422Z" level=info msg="RemoveContainer for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" returns successfully"
Sep 13 00:08:10.949349 kubelet[2464]: I0913 00:08:10.949271 2464 scope.go:117] "RemoveContainer" containerID="53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772"
Sep 13 00:08:10.949578 containerd[1446]: time="2025-09-13T00:08:10.949503089Z" level=error msg="ContainerStatus for \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\": not found"
Sep 13 00:08:10.961813 kubelet[2464]: E0913 00:08:10.961776 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\": not found" containerID="53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772"
Sep 13 00:08:10.962150 kubelet[2464]: I0913 00:08:10.961928 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772"} err="failed to get container status \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\": rpc error: code = NotFound desc = an error occurred when try to find container \"53b9455582e336ec9e32c03ccf6042ff76b53b419b37c8179e9fe1e655e42772\": not found"
Sep 13 00:08:10.962150 kubelet[2464]: I0913 00:08:10.962052 2464 scope.go:117] "RemoveContainer" containerID="0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b"
Sep 13 00:08:10.963569 containerd[1446]: time="2025-09-13T00:08:10.963268752Z" level=info msg="RemoveContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\""
Sep 13 00:08:10.966074 containerd[1446]: time="2025-09-13T00:08:10.965967966Z" level=info msg="RemoveContainer for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" returns successfully"
Sep 13 00:08:10.966204 kubelet[2464]: I0913 00:08:10.966177 2464 scope.go:117] "RemoveContainer" containerID="b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e"
Sep 13 00:08:10.967258 containerd[1446]: time="2025-09-13T00:08:10.967236375Z" level=info msg="RemoveContainer for \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\""
Sep 13 00:08:10.969913 containerd[1446]: time="2025-09-13T00:08:10.969878871Z" level=info msg="RemoveContainer for \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\" returns successfully"
Sep 13 00:08:10.970107 kubelet[2464]: I0913 00:08:10.970085 2464 scope.go:117] "RemoveContainer" containerID="90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312"
Sep 13 00:08:10.971089 containerd[1446]: time="2025-09-13T00:08:10.971065521Z" level=info msg="RemoveContainer for \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\""
Sep 13 00:08:10.973320 containerd[1446]: time="2025-09-13T00:08:10.973282067Z" level=info msg="RemoveContainer for \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\" returns successfully"
Sep 13 00:08:10.973472 kubelet[2464]: I0913 00:08:10.973453 2464 scope.go:117] "RemoveContainer" containerID="b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567"
Sep 13 00:08:10.974633 containerd[1446]: time="2025-09-13T00:08:10.974403680Z" level=info msg="RemoveContainer for \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\""
Sep 13 00:08:10.977671 containerd[1446]: time="2025-09-13T00:08:10.977544923Z" level=info msg="RemoveContainer for \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\" returns successfully"
Sep 13 00:08:10.978360 kubelet[2464]: I0913 00:08:10.978333 2464 scope.go:117] "RemoveContainer" containerID="9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a"
Sep 13 00:08:10.979435 containerd[1446]: time="2025-09-13T00:08:10.979400877Z" level=info msg="RemoveContainer for \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\""
Sep 13 00:08:10.981750 containerd[1446]: time="2025-09-13T00:08:10.981722780Z" level=info msg="RemoveContainer for \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\" returns successfully"
Sep 13 00:08:10.981945 kubelet[2464]: I0913 00:08:10.981922 2464 scope.go:117] "RemoveContainer" containerID="0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b"
Sep 13 00:08:10.982257 containerd[1446]: time="2025-09-13T00:08:10.982169810Z" level=error msg="ContainerStatus for \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\": not found"
Sep 13 00:08:10.982389 kubelet[2464]: E0913 00:08:10.982309 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\": not found" containerID="0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b"
Sep 13 00:08:10.982389 kubelet[2464]: I0913 00:08:10.982342 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b"} err="failed to get container status \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0252fb1682b769997ea401c2d6768062db6d9c0defbeaf07454011c2225cd58b\": not found"
Sep 13 00:08:10.982389 kubelet[2464]: I0913 00:08:10.982365 2464 scope.go:117] "RemoveContainer" containerID="b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e"
Sep 13 00:08:10.982665 containerd[1446]: time="2025-09-13T00:08:10.982637598Z" level=error msg="ContainerStatus for \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\": not found"
Sep 13 00:08:10.982956 kubelet[2464]: E0913 00:08:10.982929 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\": not found" containerID="b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e"
Sep 13 00:08:10.983045 kubelet[2464]: I0913 00:08:10.982962 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e"} err="failed to get container status \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b08b58a958eb61cfa339e4408f47585147fd8b4d74bf8789c3d6f2876c526b4e\": not found"
Sep 13 00:08:10.983045 kubelet[2464]: I0913 00:08:10.982989 2464 scope.go:117] "RemoveContainer" containerID="90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312"
Sep 13 00:08:10.983393 containerd[1446]: time="2025-09-13T00:08:10.983318101Z" level=error msg="ContainerStatus for \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\": not found"
Sep 13 00:08:10.983458 kubelet[2464]: E0913 00:08:10.983433 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\": not found" containerID="90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312"
Sep 13 00:08:10.983491 kubelet[2464]: I0913 00:08:10.983458 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312"} err="failed to get container status \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b888b589b509d93803b34f65a197b378576a0fc50b079f967c3f5e4bc61312\": not found"
Sep 13 00:08:10.983491 kubelet[2464]: I0913 00:08:10.983474 2464 scope.go:117] "RemoveContainer" containerID="b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567"
Sep 13 00:08:10.983765 containerd[1446]: time="2025-09-13T00:08:10.983696172Z" level=error msg="ContainerStatus for \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\": not found"
Sep 13 00:08:10.983810 kubelet[2464]: E0913 00:08:10.983788 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\": not found" containerID="b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567"
Sep 13 00:08:10.983842 kubelet[2464]: I0913 00:08:10.983814 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567"} err="failed to get container status \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\": rpc error: code = NotFound desc = an error occurred when try to find container \"b28454496350b8b599c7017caf306ac03643e7fed6cc3e649a774eeb17291567\": not found"
Sep 13 00:08:10.983842 kubelet[2464]: I0913 00:08:10.983830 2464 scope.go:117] "RemoveContainer" containerID="9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a"
Sep 13 00:08:10.984066 containerd[1446]: time="2025-09-13T00:08:10.984032084Z" level=error msg="ContainerStatus for \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\": not found"
Sep 13 00:08:10.984200 kubelet[2464]: E0913 00:08:10.984176 2464 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\": not found" containerID="9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a"
Sep 13 00:08:10.984229 kubelet[2464]: I0913 00:08:10.984205 2464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a"} err="failed to get container status \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d1a48daf26a9e5febdfd24a73f994a9d64aa71b24b25a46ccf0346b7895db4a\": not found"
Sep 13 00:08:11.303228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac912cb57931bbb5af56869872f479c9d5dca4c342a5803a587996f3efb7be90-rootfs.mount: Deactivated successfully.
Sep 13 00:08:11.303341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5-rootfs.mount: Deactivated successfully.
Sep 13 00:08:11.303399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31c7c19747be02879385a9bf7e0e639383f1ceabad98e318edc580814b17c3f5-shm.mount: Deactivated successfully.
Sep 13 00:08:11.303452 systemd[1]: var-lib-kubelet-pods-b31b7e42\x2d27ef\x2d4298\x2d81a2\x2db371a8197a65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d488c4.mount: Deactivated successfully.
Sep 13 00:08:11.303503 systemd[1]: var-lib-kubelet-pods-f14d657e\x2d3fd1\x2d43fe\x2d89d0\x2d7b799f892ab2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djkkqf.mount: Deactivated successfully.
Sep 13 00:08:11.303583 systemd[1]: var-lib-kubelet-pods-f14d657e\x2d3fd1\x2d43fe\x2d89d0\x2d7b799f892ab2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:08:11.303630 systemd[1]: var-lib-kubelet-pods-f14d657e\x2d3fd1\x2d43fe\x2d89d0\x2d7b799f892ab2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:08:12.230113 sshd[4110]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:12.242820 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:55892.service: Deactivated successfully.
Sep 13 00:08:12.244524 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:08:12.244746 systemd[1]: session-22.scope: Consumed 1.373s CPU time.
Sep 13 00:08:12.245927 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:08:12.253327 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:55892.service - OpenSSH per-connection server daemon (10.0.0.1:55344).
Sep 13 00:08:12.254272 systemd-logind[1427]: Removed session 22.
Sep 13 00:08:12.287840 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 55344 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:08:12.289266 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:12.293245 systemd-logind[1427]: New session 23 of user core.
Sep 13 00:08:12.301166 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:08:12.737904 kubelet[2464]: E0913 00:08:12.737541 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:12.740072 kubelet[2464]: I0913 00:08:12.739268 2464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31b7e42-27ef-4298-81a2-b371a8197a65" path="/var/lib/kubelet/pods/b31b7e42-27ef-4298-81a2-b371a8197a65/volumes"
Sep 13 00:08:12.740072 kubelet[2464]: I0913 00:08:12.739620 2464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" path="/var/lib/kubelet/pods/f14d657e-3fd1-43fe-89d0-7b799f892ab2/volumes"
Sep 13 00:08:13.410777 sshd[4271]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:13.418263 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:55344.service: Deactivated successfully.
Sep 13 00:08:13.422939 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:08:13.423176 systemd[1]: session-23.scope: Consumed 1.030s CPU time.
Sep 13 00:08:13.424194 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432737 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="clean-cilium-state"
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432771 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="apply-sysctl-overwrites"
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432778 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="cilium-agent"
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432784 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="mount-cgroup"
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432789 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="mount-bpf-fs"
Sep 13 00:08:13.432856 kubelet[2464]: E0913 00:08:13.432796 2464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b31b7e42-27ef-4298-81a2-b371a8197a65" containerName="cilium-operator"
Sep 13 00:08:13.432856 kubelet[2464]: I0913 00:08:13.432819 2464 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31b7e42-27ef-4298-81a2-b371a8197a65" containerName="cilium-operator"
Sep 13 00:08:13.432856 kubelet[2464]: I0913 00:08:13.432825 2464 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14d657e-3fd1-43fe-89d0-7b799f892ab2" containerName="cilium-agent"
Sep 13 00:08:13.434181 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:55354.service - OpenSSH per-connection server daemon (10.0.0.1:55354).
Sep 13 00:08:13.437778 systemd-logind[1427]: Removed session 23.
Sep 13 00:08:13.452165 systemd[1]: Created slice kubepods-burstable-podc2bb2814_3c24_488b_8e93_150ef967e5d7.slice - libcontainer container kubepods-burstable-podc2bb2814_3c24_488b_8e93_150ef967e5d7.slice.
Sep 13 00:08:13.485421 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 55354 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:08:13.488317 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:13.499736 systemd-logind[1427]: New session 24 of user core.
Sep 13 00:08:13.507383 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:08:13.514247 kubelet[2464]: I0913 00:08:13.514208 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-cni-path\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514247 kubelet[2464]: I0913 00:08:13.514245 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-host-proc-sys-kernel\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514373 kubelet[2464]: I0913 00:08:13.514267 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-bpf-maps\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514373 kubelet[2464]: I0913 00:08:13.514321 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-hostproc\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514373 kubelet[2464]: I0913 00:08:13.514355 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2bb2814-3c24-488b-8e93-150ef967e5d7-cilium-config-path\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514433 kubelet[2464]: I0913 00:08:13.514376 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2bb2814-3c24-488b-8e93-150ef967e5d7-hubble-tls\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514433 kubelet[2464]: I0913 00:08:13.514397 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9rp\" (UniqueName: \"kubernetes.io/projected/c2bb2814-3c24-488b-8e93-150ef967e5d7-kube-api-access-8g9rp\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514433 kubelet[2464]: I0913 00:08:13.514415 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-lib-modules\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514433 kubelet[2464]: I0913 00:08:13.514430 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-cilium-cgroup\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514518 kubelet[2464]: I0913 00:08:13.514446 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-etc-cni-netd\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514518 kubelet[2464]: I0913 00:08:13.514471 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-xtables-lock\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514563 kubelet[2464]: I0913 00:08:13.514518 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2bb2814-3c24-488b-8e93-150ef967e5d7-clustermesh-secrets\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514563 kubelet[2464]: I0913 00:08:13.514538 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c2bb2814-3c24-488b-8e93-150ef967e5d7-cilium-ipsec-secrets\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514610 kubelet[2464]: I0913 00:08:13.514565 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-cilium-run\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.514610 kubelet[2464]: I0913 00:08:13.514586 2464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2bb2814-3c24-488b-8e93-150ef967e5d7-host-proc-sys-net\") pod \"cilium-gvndh\" (UID: \"c2bb2814-3c24-488b-8e93-150ef967e5d7\") " pod="kube-system/cilium-gvndh"
Sep 13 00:08:13.558279 sshd[4284]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:13.564372 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:55354.service: Deactivated successfully.
Sep 13 00:08:13.565989 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:08:13.567540 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:08:13.569012 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:55362.service - OpenSSH per-connection server daemon (10.0.0.1:55362).
Sep 13 00:08:13.569751 systemd-logind[1427]: Removed session 24.
Sep 13 00:08:13.602753 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 55362 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:08:13.603980 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:13.608008 systemd-logind[1427]: New session 25 of user core.
Sep 13 00:08:13.620214 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:08:13.737099 kubelet[2464]: E0913 00:08:13.737051 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:13.755687 kubelet[2464]: E0913 00:08:13.755432 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:13.756835 containerd[1446]: time="2025-09-13T00:08:13.756773579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvndh,Uid:c2bb2814-3c24-488b-8e93-150ef967e5d7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:08:13.783794 containerd[1446]: time="2025-09-13T00:08:13.783689540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:08:13.783794 containerd[1446]: time="2025-09-13T00:08:13.783755018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:08:13.784448 containerd[1446]: time="2025-09-13T00:08:13.784255887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:08:13.784448 containerd[1446]: time="2025-09-13T00:08:13.784361925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:08:13.806220 systemd[1]: Started cri-containerd-447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032.scope - libcontainer container 447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032.
Sep 13 00:08:13.824987 containerd[1446]: time="2025-09-13T00:08:13.824940101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvndh,Uid:c2bb2814-3c24-488b-8e93-150ef967e5d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\""
Sep 13 00:08:13.825671 kubelet[2464]: E0913 00:08:13.825648 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:13.827601 containerd[1446]: time="2025-09-13T00:08:13.827355688Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:08:13.840348 containerd[1446]: time="2025-09-13T00:08:13.840256040Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8\""
Sep 13 00:08:13.841533 containerd[1446]: time="2025-09-13T00:08:13.840871427Z" level=info msg="StartContainer for \"d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8\""
Sep 13 00:08:13.866199 systemd[1]: Started cri-containerd-d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8.scope - libcontainer container d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8.
Sep 13 00:08:13.885178 containerd[1446]: time="2025-09-13T00:08:13.885136081Z" level=info msg="StartContainer for \"d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8\" returns successfully"
Sep 13 00:08:13.897376 systemd[1]: cri-containerd-d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8.scope: Deactivated successfully.
Sep 13 00:08:13.934712 containerd[1446]: time="2025-09-13T00:08:13.934650739Z" level=info msg="shim disconnected" id=d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8 namespace=k8s.io
Sep 13 00:08:13.934712 containerd[1446]: time="2025-09-13T00:08:13.934706938Z" level=warning msg="cleaning up after shim disconnected" id=d29432a2d0837cf2ee63fc943233c723f1a535dedc212bcfa4aa4025e7ec5bf8 namespace=k8s.io
Sep 13 00:08:13.934712 containerd[1446]: time="2025-09-13T00:08:13.934715817Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:13.950250 kubelet[2464]: E0913 00:08:13.950203 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:13.954290 containerd[1446]: time="2025-09-13T00:08:13.953532958Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:08:13.964542 containerd[1446]: time="2025-09-13T00:08:13.964502554Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75\""
Sep 13 00:08:13.965207 containerd[1446]: time="2025-09-13T00:08:13.965098461Z" level=info msg="StartContainer for \"f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75\""
Sep 13 00:08:13.990192 systemd[1]: Started cri-containerd-f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75.scope - libcontainer container f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75.
Sep 13 00:08:14.014033 containerd[1446]: time="2025-09-13T00:08:14.013883584Z" level=info msg="StartContainer for \"f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75\" returns successfully"
Sep 13 00:08:14.021247 systemd[1]: cri-containerd-f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75.scope: Deactivated successfully.
Sep 13 00:08:14.044606 containerd[1446]: time="2025-09-13T00:08:14.044546322Z" level=info msg="shim disconnected" id=f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75 namespace=k8s.io
Sep 13 00:08:14.044857 containerd[1446]: time="2025-09-13T00:08:14.044838236Z" level=warning msg="cleaning up after shim disconnected" id=f56d8afeec6fbc768f7eef685ab5ad70f6ab7f67a3065ac947b4e3079b9bec75 namespace=k8s.io
Sep 13 00:08:14.044932 containerd[1446]: time="2025-09-13T00:08:14.044917634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:14.788146 kubelet[2464]: E0913 00:08:14.788077 2464 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:08:14.955630 kubelet[2464]: E0913 00:08:14.955584 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:14.963065 containerd[1446]: time="2025-09-13T00:08:14.959966939Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:08:14.980773 containerd[1446]: time="2025-09-13T00:08:14.980396219Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469\""
Sep 13 00:08:14.982205 containerd[1446]: time="2025-09-13T00:08:14.982157421Z" level=info msg="StartContainer for \"8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469\""
Sep 13 00:08:15.010211 systemd[1]: Started cri-containerd-8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469.scope - libcontainer container 8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469.
Sep 13 00:08:15.045698 systemd[1]: cri-containerd-8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469.scope: Deactivated successfully.
Sep 13 00:08:15.131577 containerd[1446]: time="2025-09-13T00:08:15.126743947Z" level=info msg="StartContainer for \"8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469\" returns successfully"
Sep 13 00:08:15.202695 containerd[1446]: time="2025-09-13T00:08:15.202470805Z" level=info msg="shim disconnected" id=8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469 namespace=k8s.io
Sep 13 00:08:15.202695 containerd[1446]: time="2025-09-13T00:08:15.202527684Z" level=warning msg="cleaning up after shim disconnected" id=8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469 namespace=k8s.io
Sep 13 00:08:15.202695 containerd[1446]: time="2025-09-13T00:08:15.202535883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:15.621838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e2eb1837f25d8481e54838a88f81f0af2eeb169f288861558fd448b4f441469-rootfs.mount: Deactivated successfully.
Sep 13 00:08:15.965365 kubelet[2464]: E0913 00:08:15.965140 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:15.975468 containerd[1446]: time="2025-09-13T00:08:15.975425135Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:08:15.995943 containerd[1446]: time="2025-09-13T00:08:15.995741191Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7\""
Sep 13 00:08:15.998090 containerd[1446]: time="2025-09-13T00:08:15.998036743Z" level=info msg="StartContainer for \"573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7\""
Sep 13 00:08:16.029182 systemd[1]: Started cri-containerd-573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7.scope - libcontainer container 573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7.
Sep 13 00:08:16.059492 systemd[1]: cri-containerd-573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7.scope: Deactivated successfully.
Sep 13 00:08:16.064722 containerd[1446]: time="2025-09-13T00:08:16.064652673Z" level=info msg="StartContainer for \"573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7\" returns successfully"
Sep 13 00:08:16.084544 containerd[1446]: time="2025-09-13T00:08:16.084483392Z" level=info msg="shim disconnected" id=573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7 namespace=k8s.io
Sep 13 00:08:16.084544 containerd[1446]: time="2025-09-13T00:08:16.084536751Z" level=warning msg="cleaning up after shim disconnected" id=573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7 namespace=k8s.io
Sep 13 00:08:16.084544 containerd[1446]: time="2025-09-13T00:08:16.084545030Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:16.493718 kubelet[2464]: I0913 00:08:16.493599 2464 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:16Z","lastTransitionTime":"2025-09-13T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:08:16.622204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-573d31f4e2a14769e604809df38c6c369842e0d6b35655bdcb577bb44535cfa7-rootfs.mount: Deactivated successfully.
Sep 13 00:08:16.980389 kubelet[2464]: E0913 00:08:16.980345 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:16.985150 containerd[1446]: time="2025-09-13T00:08:16.985112802Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:08:17.010709 containerd[1446]: time="2025-09-13T00:08:17.010662051Z" level=info msg="CreateContainer within sandbox \"447a09d822674c81291941fecd99fed5f760ccfa499dfc8b747c384463c5d032\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b\""
Sep 13 00:08:17.011762 containerd[1446]: time="2025-09-13T00:08:17.011617113Z" level=info msg="StartContainer for \"e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b\""
Sep 13 00:08:17.044202 systemd[1]: Started cri-containerd-e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b.scope - libcontainer container e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b.
Sep 13 00:08:17.081084 containerd[1446]: time="2025-09-13T00:08:17.080902074Z" level=info msg="StartContainer for \"e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b\" returns successfully"
Sep 13 00:08:17.362221 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 13 00:08:17.987429 kubelet[2464]: E0913 00:08:17.987203 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:18.012596 kubelet[2464]: I0913 00:08:18.012515 2464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gvndh" podStartSLOduration=5.012496094 podStartE2EDuration="5.012496094s" podCreationTimestamp="2025-09-13 00:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:18.010725088 +0000 UTC m=+83.371504703" watchObservedRunningTime="2025-09-13 00:08:18.012496094 +0000 UTC m=+83.373275709"
Sep 13 00:08:19.756522 kubelet[2464]: E0913 00:08:19.756440 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:19.967659 systemd[1]: run-containerd-runc-k8s.io-e8496222a04f3d318490026052354a9080b5d1bf92f6b82ff4454792efe0056b-runc.2r41DN.mount: Deactivated successfully.
Sep 13 00:08:20.230936 systemd-networkd[1385]: lxc_health: Link UP
Sep 13 00:08:20.239901 systemd-networkd[1385]: lxc_health: Gained carrier
Sep 13 00:08:21.618182 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Sep 13 00:08:21.778523 kubelet[2464]: E0913 00:08:21.778466 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:21.994911 kubelet[2464]: E0913 00:08:21.994707 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:08:24.303064 kubelet[2464]: E0913 00:08:24.302960 2464 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38640->127.0.0.1:43331: write tcp 127.0.0.1:38640->127.0.0.1:43331: write: connection reset by peer
Sep 13 00:08:26.413420 sshd[4292]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:26.417213 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:08:26.417827 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:55362.service: Deactivated successfully.
Sep 13 00:08:26.419921 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:08:26.420780 systemd-logind[1427]: Removed session 25.