Sep 12 16:50:12.825738 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 16:50:12.825758 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Sep 12 15:34:33 -00 2025 Sep 12 16:50:12.825768 kernel: KASLR enabled Sep 12 16:50:12.825774 kernel: efi: EFI v2.7 by EDK II Sep 12 16:50:12.825780 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Sep 12 16:50:12.825786 kernel: random: crng init done Sep 12 16:50:12.825793 kernel: secureboot: Secure boot disabled Sep 12 16:50:12.825799 kernel: ACPI: Early table checksum verification disabled Sep 12 16:50:12.825805 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Sep 12 16:50:12.825812 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 12 16:50:12.825819 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825824 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825830 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825836 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825844 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825851 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825858 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825864 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825870 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 16:50:12.825876 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 12 16:50:12.825883 kernel: NUMA: Failed to initialise from firmware Sep 12 16:50:12.825889 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 16:50:12.825895 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 12 16:50:12.825901 kernel: Zone ranges: Sep 12 16:50:12.825908 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 16:50:12.825915 kernel: DMA32 empty Sep 12 16:50:12.825921 kernel: Normal empty Sep 12 16:50:12.825927 kernel: Movable zone start for each node Sep 12 16:50:12.825933 kernel: Early memory node ranges Sep 12 16:50:12.825940 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Sep 12 16:50:12.825946 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Sep 12 16:50:12.825952 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Sep 12 16:50:12.825958 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 12 16:50:12.825964 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 12 16:50:12.825970 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 12 16:50:12.825977 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 12 16:50:12.825983 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 12 16:50:12.825990 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 12 16:50:12.825997 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 16:50:12.826003 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 12 16:50:12.826012 kernel: psci: 
probing for conduit method from ACPI. Sep 12 16:50:12.826018 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 16:50:12.826025 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 16:50:12.826033 kernel: psci: Trusted OS migration not required Sep 12 16:50:12.826039 kernel: psci: SMC Calling Convention v1.1 Sep 12 16:50:12.826046 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 16:50:12.826053 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 16:50:12.826059 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 16:50:12.826066 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 12 16:50:12.826073 kernel: Detected PIPT I-cache on CPU0 Sep 12 16:50:12.826079 kernel: CPU features: detected: GIC system register CPU interface Sep 12 16:50:12.826086 kernel: CPU features: detected: Hardware dirty bit management Sep 12 16:50:12.826092 kernel: CPU features: detected: Spectre-v4 Sep 12 16:50:12.826100 kernel: CPU features: detected: Spectre-BHB Sep 12 16:50:12.826106 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 16:50:12.826113 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 16:50:12.826120 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 16:50:12.826126 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 16:50:12.826133 kernel: alternatives: applying boot alternatives Sep 12 16:50:12.826140 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 16:50:12.826147 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 16:50:12.826154 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 16:50:12.826160 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 16:50:12.826167 kernel: Fallback order for Node 0: 0 Sep 12 16:50:12.826175 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 12 16:50:12.826181 kernel: Policy zone: DMA Sep 12 16:50:12.826188 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 16:50:12.826194 kernel: software IO TLB: area num 4. Sep 12 16:50:12.826201 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 12 16:50:12.826208 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved) Sep 12 16:50:12.826215 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 16:50:12.826221 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 16:50:12.826228 kernel: rcu: RCU event tracing is enabled. Sep 12 16:50:12.826235 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 16:50:12.826242 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 16:50:12.826249 kernel: Tracing variant of Tasks RCU enabled. Sep 12 16:50:12.826257 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 16:50:12.826263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 16:50:12.826270 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 16:50:12.826276 kernel: GICv3: 256 SPIs implemented Sep 12 16:50:12.826283 kernel: GICv3: 0 Extended SPIs implemented Sep 12 16:50:12.826290 kernel: Root IRQ handler: gic_handle_irq Sep 12 16:50:12.826296 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 16:50:12.826303 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 16:50:12.826320 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 16:50:12.826327 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 16:50:12.826333 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 12 16:50:12.826343 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 12 16:50:12.826349 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 12 16:50:12.826356 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 16:50:12.826363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 16:50:12.826369 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 16:50:12.826376 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 16:50:12.826383 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 16:50:12.826389 kernel: arm-pv: using stolen time PV Sep 12 16:50:12.826396 kernel: Console: colour dummy device 80x25 Sep 12 16:50:12.826403 kernel: ACPI: Core revision 20230628 Sep 12 16:50:12.826410 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 16:50:12.826418 kernel: pid_max: default: 32768 minimum: 301 Sep 12 16:50:12.826424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 16:50:12.826431 kernel: landlock: Up and running. Sep 12 16:50:12.826438 kernel: SELinux: Initializing. Sep 12 16:50:12.826445 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 16:50:12.826451 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 16:50:12.826458 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 16:50:12.826465 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 16:50:12.826472 kernel: rcu: Hierarchical SRCU implementation. Sep 12 16:50:12.826480 kernel: rcu: Max phase no-delay instances is 400. Sep 12 16:50:12.826487 kernel: Platform MSI: ITS@0x8080000 domain created Sep 12 16:50:12.826494 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 12 16:50:12.826500 kernel: Remapping and enabling EFI services. Sep 12 16:50:12.826507 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 16:50:12.826514 kernel: Detected PIPT I-cache on CPU1 Sep 12 16:50:12.826521 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 16:50:12.826527 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 12 16:50:12.826534 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 16:50:12.826542 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 16:50:12.826549 kernel: Detected PIPT I-cache on CPU2 Sep 12 16:50:12.826560 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 12 16:50:12.826569 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 12 16:50:12.826576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 16:50:12.826583 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 12 16:50:12.826590 kernel: Detected PIPT I-cache on CPU3 Sep 12 16:50:12.826597 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 12 16:50:12.826605 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 12 16:50:12.826682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 16:50:12.826689 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 12 16:50:12.826696 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 16:50:12.826703 kernel: SMP: Total of 4 processors activated. Sep 12 16:50:12.826710 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 16:50:12.826717 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 16:50:12.826724 kernel: CPU features: detected: Common not Private translations Sep 12 16:50:12.826731 kernel: CPU features: detected: CRC32 instructions Sep 12 16:50:12.826740 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 16:50:12.826747 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 16:50:12.826755 kernel: CPU features: detected: LSE atomic instructions Sep 12 16:50:12.826762 kernel: CPU features: detected: Privileged Access Never Sep 12 16:50:12.826769 kernel: CPU features: detected: RAS Extension Support Sep 12 16:50:12.826776 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 16:50:12.826783 kernel: CPU: All CPU(s) started at EL1 Sep 12 16:50:12.826790 kernel: alternatives: applying system-wide alternatives Sep 12 16:50:12.826797 kernel: devtmpfs: initialized Sep 12 16:50:12.826806 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 16:50:12.826813 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 16:50:12.826820 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 16:50:12.826827 kernel: SMBIOS 3.0.0 present. 
Sep 12 16:50:12.826834 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 12 16:50:12.826841 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 16:50:12.826848 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 16:50:12.826856 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 16:50:12.826863 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 16:50:12.826871 kernel: audit: initializing netlink subsys (disabled) Sep 12 16:50:12.826878 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Sep 12 16:50:12.826885 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 16:50:12.826892 kernel: cpuidle: using governor menu Sep 12 16:50:12.826899 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 16:50:12.826907 kernel: ASID allocator initialised with 32768 entries Sep 12 16:50:12.826914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 16:50:12.826921 kernel: Serial: AMBA PL011 UART driver Sep 12 16:50:12.826928 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 16:50:12.826936 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 16:50:12.826943 kernel: Modules: 509248 pages in range for PLT usage Sep 12 16:50:12.826950 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 16:50:12.826957 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 16:50:12.826965 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 16:50:12.826972 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 16:50:12.826979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 16:50:12.826986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 16:50:12.826993 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 16:50:12.827001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 16:50:12.827008 kernel: ACPI: Added _OSI(Module Device) Sep 12 16:50:12.827015 kernel: ACPI: Added _OSI(Processor Device) Sep 12 16:50:12.827023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 16:50:12.827030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 16:50:12.827037 kernel: ACPI: Interpreter enabled Sep 12 16:50:12.827044 kernel: ACPI: Using GIC for interrupt routing Sep 12 16:50:12.827051 kernel: ACPI: MCFG table detected, 1 entries Sep 12 16:50:12.827058 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 16:50:12.827065 kernel: printk: console [ttyAMA0] enabled Sep 12 16:50:12.827074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 16:50:12.827210 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 16:50:12.827281 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 16:50:12.827359 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 16:50:12.827422 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 16:50:12.827484 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 16:50:12.827494 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 16:50:12.827504 kernel: PCI host bridge to bus 0000:00 Sep 12 16:50:12.827573 kernel: 
pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 16:50:12.827647 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 16:50:12.827705 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 16:50:12.827765 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 16:50:12.827844 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 12 16:50:12.827922 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 16:50:12.827988 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 12 16:50:12.828052 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 12 16:50:12.828117 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 16:50:12.828181 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 16:50:12.828245 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 12 16:50:12.828315 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 12 16:50:12.828377 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 16:50:12.828433 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 16:50:12.828490 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 16:50:12.828499 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 16:50:12.828506 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 16:50:12.828514 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 16:50:12.828521 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 16:50:12.828528 kernel: iommu: Default domain type: Translated Sep 12 16:50:12.828536 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 16:50:12.828544 kernel: efivars: Registered efivars operations Sep 12 16:50:12.828551 kernel: vgaarb: loaded Sep 12 16:50:12.828558 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 16:50:12.828565 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 16:50:12.828572 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 16:50:12.828579 kernel: pnp: PnP ACPI init Sep 12 16:50:12.828666 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 16:50:12.828679 kernel: pnp: PnP ACPI: found 1 devices Sep 12 16:50:12.828686 kernel: NET: Registered PF_INET protocol family Sep 12 16:50:12.828694 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 16:50:12.828701 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 16:50:12.828708 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 16:50:12.828715 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 16:50:12.828722 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 16:50:12.828730 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 16:50:12.828737 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 16:50:12.828745 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 16:50:12.828752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 16:50:12.828760 kernel: PCI: CLS 0 bytes, default 64 Sep 12 16:50:12.828767 kernel: kvm [1]: HYP mode not available Sep 12 16:50:12.828774 kernel: Initialise system trusted keyrings Sep 
12 16:50:12.828781 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 16:50:12.828792 kernel: Key type asymmetric registered Sep 12 16:50:12.828799 kernel: Asymmetric key parser 'x509' registered Sep 12 16:50:12.828806 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 16:50:12.828815 kernel: io scheduler mq-deadline registered Sep 12 16:50:12.828822 kernel: io scheduler kyber registered Sep 12 16:50:12.828829 kernel: io scheduler bfq registered Sep 12 16:50:12.828836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 16:50:12.828844 kernel: ACPI: button: Power Button [PWRB] Sep 12 16:50:12.828851 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 16:50:12.828920 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 12 16:50:12.828930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 16:50:12.828937 kernel: thunder_xcv, ver 1.0 Sep 12 16:50:12.828944 kernel: thunder_bgx, ver 1.0 Sep 12 16:50:12.828953 kernel: nicpf, ver 1.0 Sep 12 16:50:12.828960 kernel: nicvf, ver 1.0 Sep 12 16:50:12.829031 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 16:50:12.829092 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T16:50:12 UTC (1757695812) Sep 12 16:50:12.829102 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 16:50:12.829109 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 12 16:50:12.829116 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 16:50:12.829125 kernel: watchdog: Hard watchdog permanently disabled Sep 12 16:50:12.829132 kernel: NET: Registered PF_INET6 protocol family Sep 12 16:50:12.829140 kernel: Segment Routing with IPv6 Sep 12 16:50:12.829147 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 16:50:12.829154 kernel: NET: Registered PF_PACKET protocol family Sep 12 16:50:12.829161 kernel: Key type dns_resolver registered Sep 12 16:50:12.829168 kernel: registered taskstats version 1 Sep 12 16:50:12.829175 kernel: Loading compiled-in X.509 certificates Sep 12 16:50:12.829182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d6f11852774cea54e4c26b4ad4f8effa8d89e628' Sep 12 16:50:12.829189 kernel: Key type .fscrypt registered Sep 12 16:50:12.829198 kernel: Key type fscrypt-provisioning registered Sep 12 16:50:12.829205 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 16:50:12.829212 kernel: ima: Allocated hash algorithm: sha1 Sep 12 16:50:12.829219 kernel: ima: No architecture policies found Sep 12 16:50:12.829226 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 16:50:12.829233 kernel: clk: Disabling unused clocks Sep 12 16:50:12.829240 kernel: Freeing unused kernel memory: 38400K Sep 12 16:50:12.829247 kernel: Run /init as init process Sep 12 16:50:12.829256 kernel: with arguments: Sep 12 16:50:12.829263 kernel: /init Sep 12 16:50:12.829270 kernel: with environment: Sep 12 16:50:12.829277 kernel: HOME=/ Sep 12 16:50:12.829284 kernel: TERM=linux Sep 12 16:50:12.829291 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 16:50:12.829299 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 16:50:12.829315 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 16:50:12.829326 systemd[1]: Detected virtualization kvm. Sep 12 16:50:12.829334 systemd[1]: Detected architecture arm64. Sep 12 16:50:12.829341 systemd[1]: Running in initrd. Sep 12 16:50:12.829349 systemd[1]: No hostname configured, using default hostname. Sep 12 16:50:12.829357 systemd[1]: Hostname set to . Sep 12 16:50:12.829365 systemd[1]: Initializing machine ID from VM UUID. Sep 12 16:50:12.829372 systemd[1]: Queued start job for default target initrd.target. Sep 12 16:50:12.829380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:50:12.829389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:50:12.829398 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 16:50:12.829406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 16:50:12.829413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 16:50:12.829422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 16:50:12.829431 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 16:50:12.829438 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 16:50:12.829448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:50:12.829455 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:50:12.829463 systemd[1]: Reached target paths.target - Path Units. Sep 12 16:50:12.829471 systemd[1]: Reached target slices.target - Slice Units. Sep 12 16:50:12.829478 systemd[1]: Reached target swap.target - Swaps. Sep 12 16:50:12.829486 systemd[1]: Reached target timers.target - Timer Units. Sep 12 16:50:12.829494 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 16:50:12.829501 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 16:50:12.829509 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 16:50:12.829518 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 16:50:12.829526 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:50:12.829534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 16:50:12.829541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:50:12.829549 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 16:50:12.829557 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 16:50:12.829564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 16:50:12.829572 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 16:50:12.829581 systemd[1]: Starting systemd-fsck-usr.service... 
Sep 12 16:50:12.829589 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 16:50:12.829597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 16:50:12.829604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:12.829620 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 16:50:12.829628 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:50:12.829638 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 16:50:12.829646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 16:50:12.829670 systemd-journald[239]: Collecting audit messages is disabled. Sep 12 16:50:12.829690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:12.829698 systemd-journald[239]: Journal started Sep 12 16:50:12.829716 systemd-journald[239]: Runtime Journal (/run/log/journal/c3087f3427984326b552581ad7276b78) is 5.9M, max 47.3M, 41.4M free. Sep 12 16:50:12.823350 systemd-modules-load[240]: Inserted module 'overlay' Sep 12 16:50:12.834247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:12.834280 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 16:50:12.834291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 16:50:12.836644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 16:50:12.839403 systemd-modules-load[240]: Inserted module 'br_netfilter' Sep 12 16:50:12.840332 kernel: Bridge firewalling registered Sep 12 16:50:12.840469 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 16:50:12.842858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 16:50:12.845080 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 16:50:12.849564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:50:12.850845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:50:12.853822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 16:50:12.856029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:12.858248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 16:50:12.859278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:50:12.862037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 16:50:12.870977 dracut-cmdline[278]: dracut-dracut-053 Sep 12 16:50:12.873156 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 16:50:12.888863 systemd-resolved[280]: Positive Trust Anchors: Sep 12 16:50:12.888879 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 16:50:12.888909 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 16:50:12.893460 systemd-resolved[280]: Defaulting to hostname 'linux'. Sep 12 16:50:12.894464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 16:50:12.897716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:50:12.937629 kernel: SCSI subsystem initialized Sep 12 16:50:12.941622 kernel: Loading iSCSI transport class v2.0-870. Sep 12 16:50:12.948633 kernel: iscsi: registered transport (tcp) Sep 12 16:50:12.960909 kernel: iscsi: registered transport (qla4xxx) Sep 12 16:50:12.960932 kernel: QLogic iSCSI HBA Driver Sep 12 16:50:13.000130 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 16:50:13.017779 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 16:50:13.033063 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 16:50:13.033115 kernel: device-mapper: uevent: version 1.0.3 Sep 12 16:50:13.033125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 16:50:13.077634 kernel: raid6: neonx8 gen() 15780 MB/s Sep 12 16:50:13.094633 kernel: raid6: neonx4 gen() 15776 MB/s Sep 12 16:50:13.111632 kernel: raid6: neonx2 gen() 13139 MB/s Sep 12 16:50:13.128620 kernel: raid6: neonx1 gen() 10517 MB/s Sep 12 16:50:13.145628 kernel: raid6: int64x8 gen() 6782 MB/s Sep 12 16:50:13.162631 kernel: raid6: int64x4 gen() 7331 MB/s Sep 12 16:50:13.179620 kernel: raid6: int64x2 gen() 6099 MB/s Sep 12 16:50:13.196630 kernel: raid6: int64x1 gen() 5037 MB/s Sep 12 16:50:13.196657 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Sep 12 16:50:13.213630 kernel: raid6: .... xor() 11958 MB/s, rmw enabled Sep 12 16:50:13.213643 kernel: raid6: using neon recovery algorithm Sep 12 16:50:13.218652 kernel: xor: measuring software checksum speed Sep 12 16:50:13.218668 kernel: 8regs : 21613 MB/sec Sep 12 16:50:13.219719 kernel: 32regs : 21693 MB/sec Sep 12 16:50:13.219735 kernel: arm64_neon : 27917 MB/sec Sep 12 16:50:13.219745 kernel: xor: using function: arm64_neon (27917 MB/sec) Sep 12 16:50:13.267633 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 16:50:13.278037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 16:50:13.291782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:50:13.304097 systemd-udevd[463]: Using default interface naming scheme 'v255'. Sep 12 16:50:13.307775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:50:13.317827 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 12 16:50:13.328323 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 12 16:50:13.353467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 16:50:13.366739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 16:50:13.406156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:50:13.412806 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 16:50:13.424665 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 16:50:13.426336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 16:50:13.428686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:50:13.430674 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 16:50:13.442751 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 16:50:13.452547 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 16:50:13.457512 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 16:50:13.457679 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 16:50:13.463635 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 16:50:13.463704 kernel: GPT:9289727 != 19775487 Sep 12 16:50:13.463721 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 16:50:13.463746 kernel: GPT:9289727 != 19775487 Sep 12 16:50:13.464433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 16:50:13.464544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:13.468817 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:13.470175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 16:50:13.473445 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 16:50:13.470320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:13.481136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 16:50:13.473454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:13.485964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:13.500919 kernel: BTRFS: device fsid 402ea12e-53e0-48e3-8f03-9fb2de6b0089 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (524) Sep 12 16:50:13.498764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:13.504671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (514) Sep 12 16:50:13.508133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 16:50:13.528742 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 16:50:13.534665 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 16:50:13.535695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 16:50:13.544435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 16:50:13.563764 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 12 16:50:13.565396 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 16:50:13.570228 disk-uuid[553]: Primary Header is updated. Sep 12 16:50:13.570228 disk-uuid[553]: Secondary Entries is updated. Sep 12 16:50:13.570228 disk-uuid[553]: Secondary Header is updated. Sep 12 16:50:13.573634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 16:50:13.590052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:14.581633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 16:50:14.582300 disk-uuid[554]: The operation has completed successfully. Sep 12 16:50:14.608058 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 16:50:14.608153 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 16:50:14.640735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 16:50:14.643283 sh[574]: Success Sep 12 16:50:14.652654 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 16:50:14.678364 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 16:50:14.691802 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 16:50:14.694738 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 16:50:14.703248 kernel: BTRFS info (device dm-0): first mount of filesystem 402ea12e-53e0-48e3-8f03-9fb2de6b0089 Sep 12 16:50:14.703281 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:14.703292 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 16:50:14.703309 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 16:50:14.704616 kernel: BTRFS info (device dm-0): using free space tree Sep 12 16:50:14.708092 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 16:50:14.708897 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 16:50:14.716734 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 16:50:14.717997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 16:50:14.731338 kernel: BTRFS info (device vda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:14.731377 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:14.731387 kernel: BTRFS info (device vda6): using free space tree Sep 12 16:50:14.733650 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 16:50:14.737627 kernel: BTRFS info (device vda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:14.740627 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 16:50:14.750965 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 16:50:14.799205 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 16:50:14.812775 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 16:50:14.815979 ignition[664]: Ignition 2.20.0 Sep 12 16:50:14.815987 ignition[664]: Stage: fetch-offline Sep 12 16:50:14.816020 ignition[664]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:14.816029 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:14.816169 ignition[664]: parsed url from cmdline: "" Sep 12 16:50:14.816172 ignition[664]: no config URL provided Sep 12 16:50:14.816176 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 16:50:14.816183 ignition[664]: no config at "/usr/lib/ignition/user.ign" Sep 12 16:50:14.816203 ignition[664]: op(1): [started] loading QEMU firmware config module Sep 12 16:50:14.816207 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 16:50:14.821407 ignition[664]: op(1): [finished] loading QEMU firmware config module Sep 12 16:50:14.837386 systemd-networkd[760]: lo: Link UP Sep 12 16:50:14.837397 systemd-networkd[760]: lo: Gained carrier Sep 12 16:50:14.838164 systemd-networkd[760]: Enumeration completed Sep 12 16:50:14.838237 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 16:50:14.838571 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:14.838575 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 16:50:14.839428 systemd[1]: Reached target network.target - Network. Sep 12 16:50:14.839436 systemd-networkd[760]: eth0: Link UP Sep 12 16:50:14.839439 systemd-networkd[760]: eth0: Gained carrier Sep 12 16:50:14.839445 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:14.861674 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 16:50:14.871249 ignition[664]: parsing config with SHA512: 38464e49dd1a9c05efd5bd7bb7d10faae7baf8e8f8bd615a5542ee28b2797acfbc12263b5f35dbedff0339142f17e0190355995bb2f744a011c69c1558fd0571 Sep 12 16:50:14.877423 unknown[664]: fetched base config from "system" Sep 12 16:50:14.877432 unknown[664]: fetched user config from "qemu" Sep 12 16:50:14.877888 ignition[664]: fetch-offline: fetch-offline passed Sep 12 16:50:14.877964 ignition[664]: Ignition finished successfully Sep 12 16:50:14.880655 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 16:50:14.881679 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 16:50:14.885737 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 16:50:14.896955 ignition[767]: Ignition 2.20.0 Sep 12 16:50:14.896965 ignition[767]: Stage: kargs Sep 12 16:50:14.897115 ignition[767]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:14.897125 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:14.897986 ignition[767]: kargs: kargs passed Sep 12 16:50:14.898023 ignition[767]: Ignition finished successfully Sep 12 16:50:14.901039 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 16:50:14.911737 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 16:50:14.921020 ignition[776]: Ignition 2.20.0 Sep 12 16:50:14.921033 ignition[776]: Stage: disks Sep 12 16:50:14.921192 ignition[776]: no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:14.921201 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:14.922078 ignition[776]: disks: disks passed Sep 12 16:50:14.922134 ignition[776]: Ignition finished successfully Sep 12 16:50:14.925661 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 16:50:14.927381 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 16:50:14.928320 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 16:50:14.929910 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 16:50:14.931323 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 16:50:14.932634 systemd[1]: Reached target basic.target - Basic System. Sep 12 16:50:14.945748 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 16:50:14.955791 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 16:50:14.958558 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 16:50:14.969695 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 16:50:15.010621 kernel: EXT4-fs (vda9): mounted filesystem 397cbf4d-cf5b-4786-906a-df7c3e18edd9 r/w with ordered data mode. Quota mode: none. Sep 12 16:50:15.011028 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 16:50:15.012004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 16:50:15.023701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 16:50:15.025149 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 16:50:15.026447 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 16:50:15.026487 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 16:50:15.032023 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (795) Sep 12 16:50:15.026510 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 16:50:15.036593 kernel: BTRFS info (device vda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:15.036625 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:15.036637 kernel: BTRFS info (device vda6): using free space tree Sep 12 16:50:15.036646 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 16:50:15.030442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 16:50:15.036453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 16:50:15.038178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 16:50:15.069394 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 16:50:15.072389 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Sep 12 16:50:15.075344 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 16:50:15.078861 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 16:50:15.139714 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 12 16:50:15.151764 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 16:50:15.153061 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 16:50:15.157093 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 16:50:15.161634 kernel: BTRFS info (device vda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:15.170928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 16:50:15.176927 ignition[908]: INFO : Ignition 2.20.0 Sep 12 16:50:15.176927 ignition[908]: INFO : Stage: mount Sep 12 16:50:15.178148 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:15.178148 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:15.178148 ignition[908]: INFO : mount: mount passed Sep 12 16:50:15.178148 ignition[908]: INFO : Ignition finished successfully Sep 12 16:50:15.181664 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 16:50:15.192705 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 16:50:16.020759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 16:50:16.026896 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (921) Sep 12 16:50:16.026935 kernel: BTRFS info (device vda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 16:50:16.026946 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 16:50:16.028115 kernel: BTRFS info (device vda6): using free space tree Sep 12 16:50:16.029634 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 16:50:16.030960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 16:50:16.050949 ignition[938]: INFO : Ignition 2.20.0 Sep 12 16:50:16.050949 ignition[938]: INFO : Stage: files Sep 12 16:50:16.052239 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:16.052239 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:16.052239 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Sep 12 16:50:16.055154 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 16:50:16.055154 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 16:50:16.058045 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 16:50:16.059223 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 16:50:16.059223 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 16:50:16.058517 unknown[938]: wrote ssh authorized keys file for user: core Sep 12 16:50:16.062202 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 16:50:16.062202 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 16:50:16.144215 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 16:50:16.210765 systemd-networkd[760]: eth0: Gained IPv6LL Sep 12 16:50:16.710788 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 16:50:16.710788 ignition[938]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 16:50:16.713775 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 16:50:16.923350 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 16:50:17.006324 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 16:50:17.006324 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:17.009331 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 16:50:17.304780 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 16:50:17.606796 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 16:50:17.606796 ignition[938]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 16:50:17.609685 ignition[938]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 16:50:17.621812 ignition[938]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 16:50:17.625085 ignition[938]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 16:50:17.627145 ignition[938]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 16:50:17.627145 ignition[938]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 16:50:17.627145 ignition[938]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 16:50:17.627145 ignition[938]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 16:50:17.627145 ignition[938]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 16:50:17.627145 ignition[938]: INFO : files: files passed Sep 12 16:50:17.627145 ignition[938]: INFO : Ignition finished successfully Sep 12 16:50:17.627866 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 16:50:17.637753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 16:50:17.640020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 16:50:17.641226 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 16:50:17.641318 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 16:50:17.647138 initrd-setup-root-after-ignition[966]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 16:50:17.650427 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:50:17.650427 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:50:17.652875 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 16:50:17.652476 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 16:50:17.654258 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 16:50:17.664797 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 16:50:17.682943 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 16:50:17.683749 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 12 16:50:17.685210 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 16:50:17.686533 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 16:50:17.687998 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 16:50:17.689804 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 16:50:17.703042 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 16:50:17.711787 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 16:50:17.719614 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:50:17.720578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:50:17.722247 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 16:50:17.723670 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 16:50:17.723794 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 16:50:17.725793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 16:50:17.727356 systemd[1]: Stopped target basic.target - Basic System. Sep 12 16:50:17.728761 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 16:50:17.730060 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 16:50:17.731501 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 16:50:17.733081 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 16:50:17.734500 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 16:50:17.736068 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 16:50:17.737499 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 16:50:17.738840 systemd[1]: Stopped target swap.target - Swaps. Sep 12 16:50:17.740010 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 16:50:17.740131 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 16:50:17.741938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:50:17.743498 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:50:17.745059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 16:50:17.746605 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:50:17.747566 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 16:50:17.747705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 16:50:17.750112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 16:50:17.750228 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 16:50:17.751812 systemd[1]: Stopped target paths.target - Path Units. Sep 12 16:50:17.753026 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 16:50:17.756668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:50:17.757674 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 16:50:17.759386 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 16:50:17.760572 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 12 16:50:17.760663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 16:50:17.761868 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 16:50:17.761942 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 16:50:17.763194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 16:50:17.763310 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 16:50:17.764662 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 16:50:17.764761 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 16:50:17.775784 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 16:50:17.776498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 16:50:17.776644 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:50:17.779788 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 16:50:17.780459 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 16:50:17.780572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:50:17.782056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 16:50:17.787255 ignition[994]: INFO : Ignition 2.20.0 Sep 12 16:50:17.787255 ignition[994]: INFO : Stage: umount Sep 12 16:50:17.787255 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 16:50:17.787255 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 16:50:17.782181 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 16:50:17.793176 ignition[994]: INFO : umount: umount passed Sep 12 16:50:17.793176 ignition[994]: INFO : Ignition finished successfully Sep 12 16:50:17.788869 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 16:50:17.788955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 16:50:17.792269 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 16:50:17.792791 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 16:50:17.792882 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 16:50:17.795967 systemd[1]: Stopped target network.target - Network. Sep 12 16:50:17.796913 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 16:50:17.796974 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 16:50:17.798332 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 16:50:17.798375 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 16:50:17.799659 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 16:50:17.799699 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 16:50:17.801178 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 16:50:17.801219 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 16:50:17.802792 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 16:50:17.804293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 16:50:17.813162 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 16:50:17.813295 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 16:50:17.816534 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 12 16:50:17.816765 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 16:50:17.816850 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 16:50:17.820440 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 16:50:17.821045 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 16:50:17.821097 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:50:17.829693 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 16:50:17.830382 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 16:50:17.830445 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 16:50:17.832151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 16:50:17.832193 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:50:17.834638 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 16:50:17.834683 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 16:50:17.835545 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 16:50:17.835585 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 16:50:17.838054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:50:17.842673 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 16:50:17.842741 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 16:50:17.849205 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 16:50:17.849323 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 16:50:17.851126 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 16:50:17.851762 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:50:17.853883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 16:50:17.853947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 16:50:17.855189 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 16:50:17.855224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:50:17.856536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 16:50:17.856584 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 16:50:17.858827 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 16:50:17.858874 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 16:50:17.861041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 16:50:17.861089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 16:50:17.875754 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 16:50:17.876526 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 16:50:17.876578 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:50:17.879170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 16:50:17.879211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 16:50:17.882219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 16:50:17.882270 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 16:50:17.882570 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 16:50:17.882669 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 16:50:17.884443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 16:50:17.884516 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 16:50:17.886720 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 16:50:17.888071 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 16:50:17.888128 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 16:50:17.890278 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 16:50:17.898863 systemd[1]: Switching root. Sep 12 16:50:17.924449 systemd-journald[239]: Journal stopped Sep 12 16:50:18.634031 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Sep 12 16:50:18.634082 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 16:50:18.634097 kernel: SELinux: policy capability open_perms=1 Sep 12 16:50:18.634107 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 16:50:18.634119 kernel: SELinux: policy capability always_check_network=0 Sep 12 16:50:18.634137 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 16:50:18.634147 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 16:50:18.634156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 16:50:18.634165 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 16:50:18.634174 kernel: audit: type=1403 audit(1757695818.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 16:50:18.634185 systemd[1]: Successfully loaded SELinux policy in 32.250ms. Sep 12 16:50:18.634204 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.846ms. Sep 12 16:50:18.634216 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 16:50:18.634227 systemd[1]: Detected virtualization kvm. Sep 12 16:50:18.634237 systemd[1]: Detected architecture arm64. Sep 12 16:50:18.634247 systemd[1]: Detected first boot. Sep 12 16:50:18.634259 systemd[1]: Initializing machine ID from VM UUID. Sep 12 16:50:18.634269 zram_generator::config[1042]: No configuration found. Sep 12 16:50:18.634288 kernel: NET: Registered PF_VSOCK protocol family Sep 12 16:50:18.634299 systemd[1]: Populated /etc with preset unit settings. Sep 12 16:50:18.634309 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 16:50:18.634321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 16:50:18.634331 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 16:50:18.634341 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 16:50:18.634351 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Sep 12 16:50:18.634362 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 16:50:18.634372 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 16:50:18.634382 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 16:50:18.634392 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 16:50:18.634404 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 16:50:18.634415 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 16:50:18.634425 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 16:50:18.634435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 16:50:18.634445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 16:50:18.634479 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 16:50:18.634490 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 16:50:18.634500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 16:50:18.634511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 16:50:18.634522 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 16:50:18.634533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 16:50:18.634543 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 16:50:18.634553 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 16:50:18.634564 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 16:50:18.634576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 16:50:18.634589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 16:50:18.634599 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 16:50:18.634668 systemd[1]: Reached target slices.target - Slice Units. Sep 12 16:50:18.634681 systemd[1]: Reached target swap.target - Swaps. Sep 12 16:50:18.634691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 16:50:18.634701 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 16:50:18.634711 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 16:50:18.634721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 16:50:18.634731 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 16:50:18.634742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 16:50:18.634752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 16:50:18.634764 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 16:50:18.634774 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 16:50:18.634784 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 16:50:18.634796 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 12 16:50:18.634806 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 16:50:18.634816 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 16:50:18.634827 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 16:50:18.634837 systemd[1]: Reached target machines.target - Containers. Sep 12 16:50:18.634848 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 16:50:18.634860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:50:18.634870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 16:50:18.634881 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 16:50:18.634891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:50:18.634902 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 16:50:18.634911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:50:18.634921 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 16:50:18.634932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:50:18.634944 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 16:50:18.634955 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 16:50:18.634966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 16:50:18.634976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 16:50:18.634986 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 16:50:18.634997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:50:18.635007 kernel: fuse: init (API version 7.39) Sep 12 16:50:18.635017 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 16:50:18.635027 kernel: ACPI: bus type drm_connector registered Sep 12 16:50:18.635040 kernel: loop: module loaded Sep 12 16:50:18.635051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 16:50:18.635061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 16:50:18.635072 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 16:50:18.635082 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 16:50:18.635111 systemd-journald[1117]: Collecting audit messages is disabled. Sep 12 16:50:18.635133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 16:50:18.635146 systemd-journald[1117]: Journal started Sep 12 16:50:18.635167 systemd-journald[1117]: Runtime Journal (/run/log/journal/c3087f3427984326b552581ad7276b78) is 5.9M, max 47.3M, 41.4M free. Sep 12 16:50:18.454036 systemd[1]: Queued start job for default target multi-user.target. Sep 12 16:50:18.470465 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Sep 12 16:50:18.470863 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 16:50:18.637922 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 16:50:18.637956 systemd[1]: Stopped verity-setup.service. Sep 12 16:50:18.653633 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 16:50:18.654004 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 16:50:18.654936 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 16:50:18.655863 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 16:50:18.656710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 16:50:18.657655 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 16:50:18.658577 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 16:50:18.659587 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 16:50:18.661778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 16:50:18.662959 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 16:50:18.663125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 16:50:18.664291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:50:18.664443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:50:18.665576 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 16:50:18.665744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 16:50:18.666759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:50:18.666917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:50:18.668053 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 16:50:18.668206 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 16:50:18.669478 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 16:50:18.669659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:50:18.670758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 16:50:18.671885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 16:50:18.673236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 16:50:18.674476 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 16:50:18.686395 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 16:50:18.694759 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 16:50:18.696552 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 16:50:18.697489 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 16:50:18.697516 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 16:50:18.699223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 16:50:18.701172 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 16:50:18.703044 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 12 16:50:18.703939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:50:18.704925 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 16:50:18.706534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 16:50:18.707683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 16:50:18.710783 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 16:50:18.713886 systemd-journald[1117]: Time spent on flushing to /var/log/journal/c3087f3427984326b552581ad7276b78 is 18.045ms for 867 entries. Sep 12 16:50:18.713886 systemd-journald[1117]: System Journal (/var/log/journal/c3087f3427984326b552581ad7276b78) is 8M, max 195.6M, 187.6M free. Sep 12 16:50:18.737076 systemd-journald[1117]: Received client request to flush runtime journal. Sep 12 16:50:18.712975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 16:50:18.714760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:50:18.719311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 16:50:18.723790 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 16:50:18.727631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 16:50:18.729347 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 16:50:18.730395 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 16:50:18.731817 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 16:50:18.733143 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 16:50:18.736731 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 16:50:18.739374 kernel: loop0: detected capacity change from 0 to 207008 Sep 12 16:50:18.747152 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 16:50:18.752904 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 16:50:18.755069 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 16:50:18.759718 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 16:50:18.760618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:50:18.766439 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 16:50:18.772473 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 16:50:18.781905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 16:50:18.783746 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 16:50:18.792844 kernel: loop1: detected capacity change from 0 to 123192 Sep 12 16:50:18.799907 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Sep 12 16:50:18.799924 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. 
Sep 12 16:50:18.803909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 16:50:18.835757 kernel: loop2: detected capacity change from 0 to 113512 Sep 12 16:50:18.870639 kernel: loop3: detected capacity change from 0 to 207008 Sep 12 16:50:18.879347 kernel: loop4: detected capacity change from 0 to 123192 Sep 12 16:50:18.884590 kernel: loop5: detected capacity change from 0 to 113512 Sep 12 16:50:18.887308 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 16:50:18.887698 (sd-merge)[1185]: Merged extensions into '/usr'. Sep 12 16:50:18.894475 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 16:50:18.894492 systemd[1]: Reloading... Sep 12 16:50:18.955334 zram_generator::config[1213]: No configuration found. Sep 12 16:50:18.971287 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 16:50:19.038987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:50:19.087651 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 16:50:19.087945 systemd[1]: Reloading finished in 193 ms. Sep 12 16:50:19.105742 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 16:50:19.106999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 16:50:19.121730 systemd[1]: Starting ensure-sysext.service... Sep 12 16:50:19.123205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 16:50:19.131173 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Sep 12 16:50:19.131186 systemd[1]: Reloading... Sep 12 16:50:19.138219 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 16:50:19.138758 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 16:50:19.139476 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 16:50:19.139813 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 12 16:50:19.139932 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 12 16:50:19.142500 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 16:50:19.142602 systemd-tmpfiles[1250]: Skipping /boot Sep 12 16:50:19.151335 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 16:50:19.151473 systemd-tmpfiles[1250]: Skipping /boot Sep 12 16:50:19.177634 zram_generator::config[1276]: No configuration found. Sep 12 16:50:19.257145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:50:19.305724 systemd[1]: Reloading finished in 174 ms. Sep 12 16:50:19.319037 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 16:50:19.335642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
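The (sd-merge) lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images, merging them into /usr, and triggering a daemon reload. One of the discovery points is /etc/extensions, where Ignition earlier wrote the kubernetes.raw symlink pointing at /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw. The read-only sketch below is a hypothetical helper, not part of Flatcar, that just enumerates such entries the way an operator might when checking what sysext would merge.

    # Hypothetical helper: list extension images staged under /etc/extensions,
    # resolving symlinks such as kubernetes.raw -> /opt/extensions/.../kubernetes-v1.32.4-arm64.raw.
    from pathlib import Path

    def list_sysext_images(root: str = "/etc/extensions") -> None:
        base = Path(root)
        if not base.is_dir():
            print(f"{root}: no extensions directory")
            return
        for entry in sorted(base.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                target = entry.resolve() if entry.is_symlink() else entry
                print(f"{entry.name} -> {target}")

    if __name__ == "__main__":
        list_sysext_images()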
Sep 12 16:50:19.342787 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 16:50:19.344896 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 16:50:19.347199 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 16:50:19.353885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 16:50:19.358593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 16:50:19.362930 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 16:50:19.366248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:50:19.368970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:50:19.371889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:50:19.374233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:50:19.375207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:50:19.375321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:50:19.378008 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 16:50:19.380196 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 16:50:19.381782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:50:19.382005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:50:19.383973 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 16:50:19.384116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:50:19.384267 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Sep 12 16:50:19.388685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:50:19.388916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:50:19.395571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:50:19.402594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:50:19.405868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:50:19.410179 augenrules[1350]: No rules Sep 12 16:50:19.410641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:50:19.412021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:50:19.412141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:50:19.413408 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 16:50:19.415793 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 16:50:19.417897 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 12 16:50:19.418143 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 16:50:19.420436 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 16:50:19.422746 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 16:50:19.424079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:50:19.424267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:50:19.427378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:50:19.427532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:50:19.429023 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 16:50:19.429180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:50:19.430654 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 16:50:19.435319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 16:50:19.446729 systemd[1]: Finished ensure-sysext.service. Sep 12 16:50:19.455784 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 16:50:19.456990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 16:50:19.458793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 16:50:19.463104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 16:50:19.464840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 16:50:19.466601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 16:50:19.468519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 16:50:19.468564 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 16:50:19.471005 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 16:50:19.471906 augenrules[1379]: /sbin/augenrules: No change Sep 12 16:50:19.475884 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 16:50:19.476779 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 16:50:19.480935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 16:50:19.481142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 16:50:19.482328 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 16:50:19.482483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 16:50:19.483203 augenrules[1411]: No rules Sep 12 16:50:19.483708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 16:50:19.483871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 16:50:19.486977 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 16:50:19.487153 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 16:50:19.488234 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 12 16:50:19.488395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 16:50:19.493257 systemd-resolved[1319]: Positive Trust Anchors: Sep 12 16:50:19.493283 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 16:50:19.493318 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 16:50:19.494219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 16:50:19.494317 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 16:50:19.505479 systemd-resolved[1319]: Defaulting to hostname 'linux'. Sep 12 16:50:19.516381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374) Sep 12 16:50:19.541697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 16:50:19.545718 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 16:50:19.548395 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 16:50:19.553523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 16:50:19.554603 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 16:50:19.562346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 16:50:19.565218 systemd-networkd[1398]: lo: Link UP Sep 12 16:50:19.565227 systemd-networkd[1398]: lo: Gained carrier Sep 12 16:50:19.566494 systemd-networkd[1398]: Enumeration completed Sep 12 16:50:19.572883 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 16:50:19.573885 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 16:50:19.574178 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:19.574293 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 16:50:19.575013 systemd-networkd[1398]: eth0: Link UP Sep 12 16:50:19.575142 systemd-networkd[1398]: eth0: Gained carrier Sep 12 16:50:19.575198 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 16:50:19.575751 systemd[1]: Reached target network.target - Network. Sep 12 16:50:19.577491 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 16:50:19.579390 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 16:50:19.587715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 12 16:50:19.589696 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 16:50:19.590344 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Sep 12 16:50:20.081119 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 16:50:20.081169 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-09-12 16:50:20.081042 UTC. Sep 12 16:50:20.081576 systemd-resolved[1319]: Clock change detected. Flushing caches. Sep 12 16:50:20.083727 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 16:50:20.124926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 16:50:20.138781 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 16:50:20.141276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 16:50:20.154115 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 16:50:20.157803 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 16:50:20.196779 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 16:50:20.197915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 16:50:20.198814 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 16:50:20.199798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 16:50:20.200703 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 16:50:20.201756 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 16:50:20.202606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 16:50:20.203628 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 16:50:20.204645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 16:50:20.204675 systemd[1]: Reached target paths.target - Path Units. Sep 12 16:50:20.205569 systemd[1]: Reached target timers.target - Timer Units. Sep 12 16:50:20.207129 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 16:50:20.209157 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 16:50:20.212132 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 16:50:20.213274 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 16:50:20.214259 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 16:50:20.220598 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 16:50:20.222068 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 16:50:20.224062 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 16:50:20.225402 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 16:50:20.226316 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 16:50:20.227067 systemd[1]: Reached target basic.target - Basic System. 
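In the block above, systemd-timesyncd's first successful contact with 10.0.0.1 steps the system clock, which is why the journal timestamps jump from roughly 16:50:19.59 to 16:50:20.08 and why systemd-resolved logs "Clock change detected. Flushing caches." A rough back-of-the-envelope check of the step size, using the two timestamps copied from the log (an approximation, since real time also elapsed between the two messages):

    # Rough estimate of the clock step applied at initial time synchronization,
    # using the last pre-step journal stamp and the synced wall-clock time
    # reported by systemd-timesyncd (both copied from the log above).
    from datetime import datetime

    pre_sync = datetime.fromisoformat("2025-09-12 16:50:19.590344")   # last pre-step stamp
    post_sync = datetime.fromisoformat("2025-09-12 16:50:20.081042")  # "Initial clock synchronization to ..."

    step = post_sync - pre_sync
    print(f"clock stepped forward by roughly {step.total_seconds():.3f} s")  # ~0.491 s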
Sep 12 16:50:20.227752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 16:50:20.227782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 16:50:20.228644 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 16:50:20.230550 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 16:50:20.231345 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 16:50:20.233956 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 16:50:20.235982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 16:50:20.238276 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 16:50:20.238972 jq[1453]: false Sep 12 16:50:20.240129 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 16:50:20.241778 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 16:50:20.244381 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 16:50:20.248424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 16:50:20.251228 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 16:50:20.252384 extend-filesystems[1454]: Found loop3 Sep 12 16:50:20.253345 extend-filesystems[1454]: Found loop4 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found loop5 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda1 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda2 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda3 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found usr Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda4 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda6 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda7 Sep 12 16:50:20.254658 extend-filesystems[1454]: Found vda9 Sep 12 16:50:20.254658 extend-filesystems[1454]: Checking size of /dev/vda9 Sep 12 16:50:20.253581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 16:50:20.270165 dbus-daemon[1452]: [system] SELinux support is enabled Sep 12 16:50:20.276149 extend-filesystems[1454]: Resized partition /dev/vda9 Sep 12 16:50:20.253981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 16:50:20.257311 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 16:50:20.261276 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 16:50:20.279063 jq[1471]: true Sep 12 16:50:20.263582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 16:50:20.268267 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 16:50:20.268843 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 16:50:20.269105 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 16:50:20.269333 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 16:50:20.273094 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 16:50:20.280061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 16:50:20.280313 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 16:50:20.282043 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Sep 12 16:50:20.286731 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 16:50:20.304715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1368) Sep 12 16:50:20.304752 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 16:50:20.301324 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 16:50:20.321920 jq[1479]: true Sep 12 16:50:20.322048 update_engine[1468]: I20250912 16:50:20.310721 1468 main.cc:92] Flatcar Update Engine starting Sep 12 16:50:20.322048 update_engine[1468]: I20250912 16:50:20.319130 1468 update_check_scheduler.cc:74] Next update check in 2m17s Sep 12 16:50:20.322228 tar[1476]: linux-arm64/LICENSE Sep 12 16:50:20.322228 tar[1476]: linux-arm64/helm Sep 12 16:50:20.310029 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 16:50:20.322558 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 16:50:20.322558 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 16:50:20.322558 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 16:50:20.310054 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 16:50:20.325708 extend-filesystems[1454]: Resized filesystem in /dev/vda9 Sep 12 16:50:20.311059 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 16:50:20.311072 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 16:50:20.315336 systemd[1]: Started update-engine.service - Update Engine. Sep 12 16:50:20.331865 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 16:50:20.333435 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 16:50:20.334733 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 16:50:20.341707 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 16:50:20.342212 systemd-logind[1465]: New seat seat0. Sep 12 16:50:20.343731 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 16:50:20.372433 bash[1508]: Updated "/home/core/.ssh/authorized_keys" Sep 12 16:50:20.374758 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 16:50:20.376287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
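The extend-filesystems step above grows the root filesystem online: resize2fs reports /dev/vda9 going from 553472 to 1864699 blocks, with the log noting 4k blocks. In byte terms that is roughly 2.1 GiB before and 7.1 GiB after, as the small check below works out from the numbers in the log.

    # Size check for the /dev/vda9 online resize reported above:
    # block counts are taken from the log, block size is the 4k noted there.
    BLOCK_SIZE = 4096          # bytes per block ("(4k) blocks" per the log)
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
    # before: 2.11 GiB, after: 7.11 GiB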
Sep 12 16:50:20.397495 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 16:50:20.450763 containerd[1480]: time="2025-09-12T16:50:20.450663843Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 16:50:20.476937 containerd[1480]: time="2025-09-12T16:50:20.476885643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478344923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478384283Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478400643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478551043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478567683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478617003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478740 containerd[1480]: time="2025-09-12T16:50:20.478628363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478915 containerd[1480]: time="2025-09-12T16:50:20.478837003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478915 containerd[1480]: time="2025-09-12T16:50:20.478854523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478915 containerd[1480]: time="2025-09-12T16:50:20.478867163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478915 containerd[1480]: time="2025-09-12T16:50:20.478876483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.478984 containerd[1480]: time="2025-09-12T16:50:20.478948803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.479154 containerd[1480]: time="2025-09-12T16:50:20.479129723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 16:50:20.479269 containerd[1480]: time="2025-09-12T16:50:20.479254203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 16:50:20.479293 containerd[1480]: time="2025-09-12T16:50:20.479271723Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 16:50:20.479367 containerd[1480]: time="2025-09-12T16:50:20.479343123Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 16:50:20.479410 containerd[1480]: time="2025-09-12T16:50:20.479398683Z" level=info msg="metadata content store policy set" policy=shared Sep 12 16:50:20.483000 containerd[1480]: time="2025-09-12T16:50:20.482966483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 16:50:20.483045 containerd[1480]: time="2025-09-12T16:50:20.483016163Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 16:50:20.483045 containerd[1480]: time="2025-09-12T16:50:20.483033083Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 16:50:20.483080 containerd[1480]: time="2025-09-12T16:50:20.483047603Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 16:50:20.483080 containerd[1480]: time="2025-09-12T16:50:20.483060643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 16:50:20.483221 containerd[1480]: time="2025-09-12T16:50:20.483201323Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 16:50:20.483526 containerd[1480]: time="2025-09-12T16:50:20.483510283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 16:50:20.483626 containerd[1480]: time="2025-09-12T16:50:20.483609963Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 16:50:20.483649 containerd[1480]: time="2025-09-12T16:50:20.483632243Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 16:50:20.483667 containerd[1480]: time="2025-09-12T16:50:20.483647323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 16:50:20.483667 containerd[1480]: time="2025-09-12T16:50:20.483660803Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483725 containerd[1480]: time="2025-09-12T16:50:20.483687403Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483725 containerd[1480]: time="2025-09-12T16:50:20.483715163Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483759 containerd[1480]: time="2025-09-12T16:50:20.483728403Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483759 containerd[1480]: time="2025-09-12T16:50:20.483744883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 16:50:20.483759 containerd[1480]: time="2025-09-12T16:50:20.483756843Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483815 containerd[1480]: time="2025-09-12T16:50:20.483768403Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483815 containerd[1480]: time="2025-09-12T16:50:20.483778563Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 16:50:20.483815 containerd[1480]: time="2025-09-12T16:50:20.483798603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483815 containerd[1480]: time="2025-09-12T16:50:20.483811683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483823523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483835603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483847483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483860283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483871523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483889 containerd[1480]: time="2025-09-12T16:50:20.483883323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483895683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483909243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483919643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483930843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483942003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483956403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483975083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.483986 containerd[1480]: time="2025-09-12T16:50:20.483988083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 12 16:50:20.484115 containerd[1480]: time="2025-09-12T16:50:20.483998723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484760323Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484803283Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484832723Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484845763Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484855203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484867883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484877883Z" level=info msg="NRI interface is disabled by configuration." Sep 12 16:50:20.485301 containerd[1480]: time="2025-09-12T16:50:20.484888003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 16:50:20.488797 containerd[1480]: time="2025-09-12T16:50:20.488733043Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 16:50:20.488968 containerd[1480]: time="2025-09-12T16:50:20.488950683Z" level=info msg="Connect containerd service" Sep 12 16:50:20.489065 containerd[1480]: time="2025-09-12T16:50:20.489050643Z" level=info msg="using legacy CRI server" Sep 12 16:50:20.489114 containerd[1480]: time="2025-09-12T16:50:20.489102283Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 16:50:20.489408 containerd[1480]: time="2025-09-12T16:50:20.489387403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 16:50:20.490112 containerd[1480]: time="2025-09-12T16:50:20.490082003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 16:50:20.490681 containerd[1480]: time="2025-09-12T16:50:20.490656683Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 16:50:20.490889 containerd[1480]: time="2025-09-12T16:50:20.490810563Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 16:50:20.490979 containerd[1480]: time="2025-09-12T16:50:20.490711563Z" level=info msg="Start subscribing containerd event" Sep 12 16:50:20.493822 containerd[1480]: time="2025-09-12T16:50:20.493745243Z" level=info msg="Start recovering state" Sep 12 16:50:20.493822 containerd[1480]: time="2025-09-12T16:50:20.493823203Z" level=info msg="Start event monitor" Sep 12 16:50:20.493901 containerd[1480]: time="2025-09-12T16:50:20.493842043Z" level=info msg="Start snapshots syncer" Sep 12 16:50:20.493901 containerd[1480]: time="2025-09-12T16:50:20.493853283Z" level=info msg="Start cni network conf syncer for default" Sep 12 16:50:20.493901 containerd[1480]: time="2025-09-12T16:50:20.493859963Z" level=info msg="Start streaming server" Sep 12 16:50:20.494046 containerd[1480]: time="2025-09-12T16:50:20.493986963Z" level=info msg="containerd successfully booted in 0.044616s" Sep 12 16:50:20.494064 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 16:50:20.684008 tar[1476]: linux-arm64/README.md Sep 12 16:50:20.700303 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 16:50:20.840481 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 16:50:20.857758 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 16:50:20.867927 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 16:50:20.873561 systemd[1]: issuegen.service: Deactivated successfully. 
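The CNI error logged above ("no network config found in /etc/cni/net.d") is expected at this point in the boot: the CRI plugin configuration shown just before it names /etc/cni/net.d as the network-config directory, and no pod-network add-on has installed a conflist there yet. A minimal sketch of the same check, purely illustrative (the helper name and the set of extensions are assumptions, not containerd code):

# Illustrative only: mimics the "no network config found" condition logged by
# containerd's CRI plugin. The directory comes from the logged CRI config; the
# helper itself is hypothetical.
from pathlib import Path

def cni_configs(conf_dir="/etc/cni/net.d"):
    exts = {".conf", ".conflist", ".json"}
    d = Path(conf_dir)
    return sorted(p for p in d.glob("*") if p.suffix in exts) if d.is_dir() else []

if not cni_configs():
    print("cni config load failed: no network config found in /etc/cni/net.d")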
Sep 12 16:50:20.873777 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 16:50:20.875909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 16:50:20.885534 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 16:50:20.887846 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 16:50:20.889551 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 16:50:20.890630 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 16:50:21.692883 systemd-networkd[1398]: eth0: Gained IPv6LL Sep 12 16:50:21.698397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 16:50:21.699852 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 16:50:21.713912 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 16:50:21.716030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:21.717788 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 16:50:21.730057 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 16:50:21.730237 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 16:50:21.732780 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 16:50:21.735615 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 16:50:22.249260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:22.250573 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 16:50:22.252482 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:50:22.254857 systemd[1]: Startup finished in 491ms (kernel) + 5.417s (initrd) + 3.708s (userspace) = 9.617s. Sep 12 16:50:22.582383 kubelet[1565]: E0912 16:50:22.582263 1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:50:22.585062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:50:22.585203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:50:22.586857 systemd[1]: kubelet.service: Consumed 728ms CPU time, 260.5M memory peak. Sep 12 16:50:26.033959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 16:50:26.035018 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:52224.service - OpenSSH per-connection server daemon (10.0.0.1:52224). Sep 12 16:50:26.087772 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 52224 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.089534 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.095007 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 16:50:26.101001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 16:50:26.105855 systemd-logind[1465]: New session 1 of user core. Sep 12 16:50:26.110726 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
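The kubelet failure above follows the usual first-boot pattern on a node that has not yet run kubeadm init or kubeadm join: the service points kubelet at /var/lib/kubelet/config.yaml (typically via kubeadm's drop-in), that file is only written during init/join, so every start attempt exits with status 1 and systemd keeps scheduling restarts until the file exists. A tiny illustrative check under that assumption (hypothetical snippet, not kubelet code):

# Hypothetical illustration of the failure mode in the kubelet log above:
# the config file kubeadm normally writes does not exist yet on first boot.
from pathlib import Path
import sys

config = Path("/var/lib/kubelet/config.yaml")
if not config.exists():
    sys.exit(f'failed to load kubelet config file "{config}": no such file or directory')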
Sep 12 16:50:26.112976 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 16:50:26.118554 (systemd)[1582]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 16:50:26.120478 systemd-logind[1465]: New session c1 of user core. Sep 12 16:50:26.229163 systemd[1582]: Queued start job for default target default.target. Sep 12 16:50:26.240626 systemd[1582]: Created slice app.slice - User Application Slice. Sep 12 16:50:26.240654 systemd[1582]: Reached target paths.target - Paths. Sep 12 16:50:26.240690 systemd[1582]: Reached target timers.target - Timers. Sep 12 16:50:26.241913 systemd[1582]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 16:50:26.250548 systemd[1582]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 16:50:26.250606 systemd[1582]: Reached target sockets.target - Sockets. Sep 12 16:50:26.250643 systemd[1582]: Reached target basic.target - Basic System. Sep 12 16:50:26.250670 systemd[1582]: Reached target default.target - Main User Target. Sep 12 16:50:26.250726 systemd[1582]: Startup finished in 125ms. Sep 12 16:50:26.250915 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 16:50:26.252365 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 16:50:26.312964 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:52236.service - OpenSSH per-connection server daemon (10.0.0.1:52236). Sep 12 16:50:26.361011 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 52236 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.362098 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.366516 systemd-logind[1465]: New session 2 of user core. Sep 12 16:50:26.379875 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 16:50:26.430875 sshd[1595]: Connection closed by 10.0.0.1 port 52236 Sep 12 16:50:26.431523 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:26.451760 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:52236.service: Deactivated successfully. Sep 12 16:50:26.453303 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 16:50:26.454664 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. Sep 12 16:50:26.456168 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:52250.service - OpenSSH per-connection server daemon (10.0.0.1:52250). Sep 12 16:50:26.457317 systemd-logind[1465]: Removed session 2. Sep 12 16:50:26.495183 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 52250 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.496291 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.500487 systemd-logind[1465]: New session 3 of user core. Sep 12 16:50:26.511848 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 16:50:26.559744 sshd[1603]: Connection closed by 10.0.0.1 port 52250 Sep 12 16:50:26.560059 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:26.570779 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:52250.service: Deactivated successfully. Sep 12 16:50:26.572202 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 16:50:26.572913 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit. Sep 12 16:50:26.579057 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256). 
Sep 12 16:50:26.580096 systemd-logind[1465]: Removed session 3. Sep 12 16:50:26.614475 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.615510 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.619751 systemd-logind[1465]: New session 4 of user core. Sep 12 16:50:26.635840 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 16:50:26.687152 sshd[1611]: Connection closed by 10.0.0.1 port 52256 Sep 12 16:50:26.687473 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:26.703547 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:52256.service: Deactivated successfully. Sep 12 16:50:26.704872 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 16:50:26.706063 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. Sep 12 16:50:26.707178 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:52266.service - OpenSSH per-connection server daemon (10.0.0.1:52266). Sep 12 16:50:26.707856 systemd-logind[1465]: Removed session 4. Sep 12 16:50:26.745670 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 52266 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.746756 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.750991 systemd-logind[1465]: New session 5 of user core. Sep 12 16:50:26.758850 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 16:50:26.816163 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 16:50:26.816449 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:50:26.829570 sudo[1620]: pam_unix(sudo:session): session closed for user root Sep 12 16:50:26.830951 sshd[1619]: Connection closed by 10.0.0.1 port 52266 Sep 12 16:50:26.831261 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:26.843673 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:52266.service: Deactivated successfully. Sep 12 16:50:26.846962 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 16:50:26.848180 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. Sep 12 16:50:26.849405 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:52270.service - OpenSSH per-connection server daemon (10.0.0.1:52270). Sep 12 16:50:26.851056 systemd-logind[1465]: Removed session 5. Sep 12 16:50:26.888634 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 52270 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:26.889690 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:26.893598 systemd-logind[1465]: New session 6 of user core. Sep 12 16:50:26.902819 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 16:50:26.953103 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 16:50:26.953393 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:50:26.956450 sudo[1630]: pam_unix(sudo:session): session closed for user root Sep 12 16:50:26.961175 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 16:50:26.961462 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:50:26.986045 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 16:50:27.008236 augenrules[1652]: No rules Sep 12 16:50:27.009581 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 16:50:27.010785 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 16:50:27.011964 sudo[1629]: pam_unix(sudo:session): session closed for user root Sep 12 16:50:27.013776 sshd[1628]: Connection closed by 10.0.0.1 port 52270 Sep 12 16:50:27.014069 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:27.024217 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:52270.service: Deactivated successfully. Sep 12 16:50:27.025639 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 16:50:27.026265 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Sep 12 16:50:27.028091 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:52274.service - OpenSSH per-connection server daemon (10.0.0.1:52274). Sep 12 16:50:27.028833 systemd-logind[1465]: Removed session 6. Sep 12 16:50:27.067220 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 52274 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:50:27.068303 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:50:27.072304 systemd-logind[1465]: New session 7 of user core. Sep 12 16:50:27.080829 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 16:50:27.131253 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 16:50:27.131845 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 16:50:27.410919 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 16:50:27.411013 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 16:50:27.608856 dockerd[1683]: time="2025-09-12T16:50:27.608515283Z" level=info msg="Starting up" Sep 12 16:50:27.753498 dockerd[1683]: time="2025-09-12T16:50:27.753403523Z" level=info msg="Loading containers: start." Sep 12 16:50:27.886715 kernel: Initializing XFRM netlink socket Sep 12 16:50:27.964568 systemd-networkd[1398]: docker0: Link UP Sep 12 16:50:28.011082 dockerd[1683]: time="2025-09-12T16:50:28.010961883Z" level=info msg="Loading containers: done." Sep 12 16:50:28.023788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck126498401-merged.mount: Deactivated successfully. 
Sep 12 16:50:28.024797 dockerd[1683]: time="2025-09-12T16:50:28.024737803Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 16:50:28.024848 dockerd[1683]: time="2025-09-12T16:50:28.024832523Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 16:50:28.025033 dockerd[1683]: time="2025-09-12T16:50:28.025003203Z" level=info msg="Daemon has completed initialization" Sep 12 16:50:28.051507 dockerd[1683]: time="2025-09-12T16:50:28.051397763Z" level=info msg="API listen on /run/docker.sock" Sep 12 16:50:28.051564 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 16:50:28.660947 containerd[1480]: time="2025-09-12T16:50:28.660885163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 16:50:29.232401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758450935.mount: Deactivated successfully. Sep 12 16:50:30.443161 containerd[1480]: time="2025-09-12T16:50:30.443109083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:30.443552 containerd[1480]: time="2025-09-12T16:50:30.443503043Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 12 16:50:30.444370 containerd[1480]: time="2025-09-12T16:50:30.444312603Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:30.447270 containerd[1480]: time="2025-09-12T16:50:30.447219803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:30.448765 containerd[1480]: time="2025-09-12T16:50:30.448401883Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.78745364s" Sep 12 16:50:30.448765 containerd[1480]: time="2025-09-12T16:50:30.448441283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 16:50:30.449168 containerd[1480]: time="2025-09-12T16:50:30.449129643Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 16:50:31.603143 containerd[1480]: time="2025-09-12T16:50:31.603088723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:31.604022 containerd[1480]: time="2025-09-12T16:50:31.603828643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 12 16:50:31.604725 containerd[1480]: time="2025-09-12T16:50:31.604682163Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:31.607940 containerd[1480]: time="2025-09-12T16:50:31.607897523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:31.609041 containerd[1480]: time="2025-09-12T16:50:31.608929443Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.15976644s" Sep 12 16:50:31.609041 containerd[1480]: time="2025-09-12T16:50:31.608957483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 16:50:31.609474 containerd[1480]: time="2025-09-12T16:50:31.609451803Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 16:50:32.641386 containerd[1480]: time="2025-09-12T16:50:32.641320803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:32.641913 containerd[1480]: time="2025-09-12T16:50:32.641865203Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 12 16:50:32.642940 containerd[1480]: time="2025-09-12T16:50:32.642907163Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:32.646164 containerd[1480]: time="2025-09-12T16:50:32.646136363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:32.647231 containerd[1480]: time="2025-09-12T16:50:32.647199083Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.03771604s" Sep 12 16:50:32.647258 containerd[1480]: time="2025-09-12T16:50:32.647232723Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 16:50:32.647711 containerd[1480]: time="2025-09-12T16:50:32.647649843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 16:50:32.835526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 16:50:32.846849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:32.947206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 16:50:32.951140 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:50:33.004833 kubelet[1955]: E0912 16:50:33.004793 1955 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:50:33.008003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:50:33.008149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:50:33.008637 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.8M memory peak. Sep 12 16:50:33.665828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753740608.mount: Deactivated successfully. Sep 12 16:50:34.059078 containerd[1480]: time="2025-09-12T16:50:34.058951643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:34.059563 containerd[1480]: time="2025-09-12T16:50:34.059512843Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 12 16:50:34.060334 containerd[1480]: time="2025-09-12T16:50:34.060291483Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:34.062574 containerd[1480]: time="2025-09-12T16:50:34.062542723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:34.063420 containerd[1480]: time="2025-09-12T16:50:34.063379643Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.41558444s" Sep 12 16:50:34.063456 containerd[1480]: time="2025-09-12T16:50:34.063421483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 16:50:34.063885 containerd[1480]: time="2025-09-12T16:50:34.063845083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 16:50:34.601727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927235607.mount: Deactivated successfully. 
Sep 12 16:50:35.280820 containerd[1480]: time="2025-09-12T16:50:35.280774043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.281726 containerd[1480]: time="2025-09-12T16:50:35.281320963Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 16:50:35.283938 containerd[1480]: time="2025-09-12T16:50:35.283884003Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.286882 containerd[1480]: time="2025-09-12T16:50:35.286850483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.289142 containerd[1480]: time="2025-09-12T16:50:35.289103563Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.22522684s" Sep 12 16:50:35.289142 containerd[1480]: time="2025-09-12T16:50:35.289140603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 16:50:35.289539 containerd[1480]: time="2025-09-12T16:50:35.289517843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 16:50:35.715163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048974223.mount: Deactivated successfully. 
Sep 12 16:50:35.718891 containerd[1480]: time="2025-09-12T16:50:35.718854683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.720374 containerd[1480]: time="2025-09-12T16:50:35.720311083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 16:50:35.721307 containerd[1480]: time="2025-09-12T16:50:35.721266243Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.723093 containerd[1480]: time="2025-09-12T16:50:35.723064683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:35.723885 containerd[1480]: time="2025-09-12T16:50:35.723851283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.30272ms" Sep 12 16:50:35.723929 containerd[1480]: time="2025-09-12T16:50:35.723882403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 16:50:35.724261 containerd[1480]: time="2025-09-12T16:50:35.724233083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 16:50:36.228708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193367815.mount: Deactivated successfully. Sep 12 16:50:38.190649 containerd[1480]: time="2025-09-12T16:50:38.190591443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:38.235943 containerd[1480]: time="2025-09-12T16:50:38.235870523Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 12 16:50:38.388789 containerd[1480]: time="2025-09-12T16:50:38.388742963Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:38.392184 containerd[1480]: time="2025-09-12T16:50:38.392137443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:50:38.393461 containerd[1480]: time="2025-09-12T16:50:38.393376603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.66911436s" Sep 12 16:50:38.393461 containerd[1480]: time="2025-09-12T16:50:38.393407003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 16:50:43.258641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
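The pull timings logged above can be read as effective throughput by dividing each logged size by its logged duration; the etcd image (67941650 bytes in 2.66911436s), for example, works out to roughly 25 MB/s. A small worked example using only the figures from this log:

# Effective pull throughput from the sizes/durations logged above (bytes, seconds).
pulls = {
    "kube-apiserver:v1.32.9": (26360284, 1.78745364),
    "kube-controller-manager:v1.32.9": (24099975, 1.15976644),
    "kube-scheduler:v1.32.9": (19053117, 1.03771604),
    "kube-proxy:v1.32.9": (27416836, 1.41558444),
    "coredns:v1.11.3": (16948420, 1.22522684),
    "pause:3.10": (267933, 0.43430272),
    "etcd:3.5.16-0": (67941650, 2.66911436),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")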
Sep 12 16:50:43.270870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:43.376132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:43.379197 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 16:50:43.411638 kubelet[2111]: E0912 16:50:43.411573 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 16:50:43.414097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 16:50:43.414241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 16:50:43.414524 systemd[1]: kubelet.service: Consumed 122ms CPU time, 106.9M memory peak. Sep 12 16:50:45.221501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:45.222016 systemd[1]: kubelet.service: Consumed 122ms CPU time, 106.9M memory peak. Sep 12 16:50:45.232883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:45.251805 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)... Sep 12 16:50:45.251820 systemd[1]: Reloading... Sep 12 16:50:45.320728 zram_generator::config[2169]: No configuration found. Sep 12 16:50:45.538933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:50:45.610299 systemd[1]: Reloading finished in 358 ms. Sep 12 16:50:45.647373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:45.650463 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:50:45.651145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:45.651907 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 16:50:45.652166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:45.652212 systemd[1]: kubelet.service: Consumed 81ms CPU time, 95.3M memory peak. Sep 12 16:50:45.653644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:45.752207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:45.755944 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:50:45.788511 kubelet[2219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:50:45.788511 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 16:50:45.788511 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:50:45.788801 kubelet[2219]: I0912 16:50:45.788562 2219 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 16:50:47.149562 kubelet[2219]: I0912 16:50:47.149418 2219 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 16:50:47.149562 kubelet[2219]: I0912 16:50:47.149447 2219 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 16:50:47.149909 kubelet[2219]: I0912 16:50:47.149710 2219 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 16:50:47.170489 kubelet[2219]: E0912 16:50:47.170443 2219 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:47.171976 kubelet[2219]: I0912 16:50:47.171945 2219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 16:50:47.176185 kubelet[2219]: E0912 16:50:47.176161 2219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 16:50:47.176185 kubelet[2219]: I0912 16:50:47.176184 2219 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 16:50:47.179211 kubelet[2219]: I0912 16:50:47.179184 2219 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 16:50:47.179830 kubelet[2219]: I0912 16:50:47.179784 2219 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 16:50:47.179980 kubelet[2219]: I0912 16:50:47.179823 2219 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 16:50:47.180071 kubelet[2219]: I0912 16:50:47.180047 2219 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 16:50:47.180071 kubelet[2219]: I0912 16:50:47.180057 2219 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 16:50:47.180266 kubelet[2219]: I0912 16:50:47.180239 2219 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:50:47.183003 kubelet[2219]: I0912 16:50:47.182971 2219 kubelet.go:446] "Attempting to sync node with API server" Sep 12 16:50:47.183003 kubelet[2219]: I0912 16:50:47.182998 2219 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 16:50:47.183058 kubelet[2219]: I0912 16:50:47.183015 2219 kubelet.go:352] "Adding apiserver pod source" Sep 12 16:50:47.183058 kubelet[2219]: I0912 16:50:47.183025 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 16:50:47.187131 kubelet[2219]: W0912 16:50:47.187081 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:47.187202 kubelet[2219]: E0912 16:50:47.187141 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:47.187202 kubelet[2219]: W0912 16:50:47.187178 2219 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:47.187258 kubelet[2219]: E0912 16:50:47.187231 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:47.188650 kubelet[2219]: I0912 16:50:47.187550 2219 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 16:50:47.188650 kubelet[2219]: I0912 16:50:47.188175 2219 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 16:50:47.188650 kubelet[2219]: W0912 16:50:47.188302 2219 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 16:50:47.189385 kubelet[2219]: I0912 16:50:47.189365 2219 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 16:50:47.189469 kubelet[2219]: I0912 16:50:47.189460 2219 server.go:1287] "Started kubelet" Sep 12 16:50:47.190526 kubelet[2219]: I0912 16:50:47.190499 2219 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 16:50:47.191539 kubelet[2219]: I0912 16:50:47.191260 2219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 16:50:47.191995 kubelet[2219]: I0912 16:50:47.191975 2219 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 16:50:47.194259 kubelet[2219]: I0912 16:50:47.194234 2219 server.go:479] "Adding debug handlers to kubelet server" Sep 12 16:50:47.194313 kubelet[2219]: I0912 16:50:47.194271 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 16:50:47.195406 kubelet[2219]: E0912 16:50:47.195017 2219 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864970f72e7ee9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 16:50:47.189442203 +0000 UTC m=+1.430519561,LastTimestamp:2025-09-12 16:50:47.189442203 +0000 UTC m=+1.430519561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 16:50:47.195741 kubelet[2219]: I0912 16:50:47.195552 2219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 16:50:47.196539 kubelet[2219]: E0912 16:50:47.196513 2219 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 16:50:47.196667 kubelet[2219]: E0912 16:50:47.196650 2219 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:47.196736 kubelet[2219]: I0912 16:50:47.196678 2219 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 16:50:47.196850 kubelet[2219]: I0912 16:50:47.196828 2219 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 16:50:47.196895 kubelet[2219]: I0912 16:50:47.196882 2219 reconciler.go:26] "Reconciler: start to sync state" Sep 12 16:50:47.197304 kubelet[2219]: W0912 16:50:47.197143 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:47.197304 kubelet[2219]: E0912 16:50:47.197190 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:47.197393 kubelet[2219]: E0912 16:50:47.197303 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Sep 12 16:50:47.197393 kubelet[2219]: I0912 16:50:47.197333 2219 factory.go:221] Registration of the systemd container factory successfully Sep 12 16:50:47.197433 kubelet[2219]: I0912 16:50:47.197392 2219 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 16:50:47.199579 kubelet[2219]: I0912 16:50:47.199557 2219 factory.go:221] Registration of the containerd container factory successfully Sep 12 16:50:47.209915 kubelet[2219]: I0912 16:50:47.209872 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 16:50:47.211387 kubelet[2219]: I0912 16:50:47.211353 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 16:50:47.211387 kubelet[2219]: I0912 16:50:47.211381 2219 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 16:50:47.211459 kubelet[2219]: I0912 16:50:47.211398 2219 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
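The repeated "connection refused" errors against 10.0.0.54:6443 are likewise expected at this stage: on a kubeadm-style control-plane node the API server runs as a static pod that this same kubelet is about to create from /etc/kubernetes/manifests, so nothing is listening on the secure port yet. A trivial probe illustrating the condition (address and port taken from the log; the snippet is only an illustration):

# Illustrative probe of the API server endpoint seen in the log; at this point
# nothing listens on 6443 yet, so connect() fails with ECONNREFUSED.
import socket

try:
    socket.create_connection(("10.0.0.54", 6443), timeout=1).close()
    print("apiserver is up")
except OSError as e:
    print(f"dial tcp 10.0.0.54:6443: {e}")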
Sep 12 16:50:47.211459 kubelet[2219]: I0912 16:50:47.211407 2219 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 16:50:47.211459 kubelet[2219]: E0912 16:50:47.211441 2219 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 16:50:47.214096 kubelet[2219]: W0912 16:50:47.214070 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:47.214165 kubelet[2219]: E0912 16:50:47.214108 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:47.214221 kubelet[2219]: I0912 16:50:47.214200 2219 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 16:50:47.214246 kubelet[2219]: I0912 16:50:47.214213 2219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 16:50:47.214246 kubelet[2219]: I0912 16:50:47.214237 2219 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:50:47.290352 kubelet[2219]: I0912 16:50:47.290319 2219 policy_none.go:49] "None policy: Start" Sep 12 16:50:47.290352 kubelet[2219]: I0912 16:50:47.290360 2219 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 16:50:47.290492 kubelet[2219]: I0912 16:50:47.290373 2219 state_mem.go:35] "Initializing new in-memory state store" Sep 12 16:50:47.295268 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 16:50:47.303055 kubelet[2219]: E0912 16:50:47.296733 2219 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:47.306096 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 16:50:47.308475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 16:50:47.312198 kubelet[2219]: E0912 16:50:47.312152 2219 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 16:50:47.322462 kubelet[2219]: I0912 16:50:47.322432 2219 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 16:50:47.322631 kubelet[2219]: I0912 16:50:47.322600 2219 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 16:50:47.322823 kubelet[2219]: I0912 16:50:47.322620 2219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 16:50:47.322852 kubelet[2219]: I0912 16:50:47.322840 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 16:50:47.323860 kubelet[2219]: E0912 16:50:47.323822 2219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 16:50:47.323860 kubelet[2219]: E0912 16:50:47.323861 2219 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 16:50:47.397958 kubelet[2219]: E0912 16:50:47.397926 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Sep 12 16:50:47.424337 kubelet[2219]: I0912 16:50:47.423732 2219 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 16:50:47.424337 kubelet[2219]: E0912 16:50:47.424022 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 12 16:50:47.519659 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 12 16:50:47.529328 kubelet[2219]: E0912 16:50:47.529293 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:47.531609 systemd[1]: Created slice kubepods-burstable-pod580443e352d12f13348a4700f95c9ad2.slice - libcontainer container kubepods-burstable-pod580443e352d12f13348a4700f95c9ad2.slice. Sep 12 16:50:47.541619 kubelet[2219]: E0912 16:50:47.541598 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:47.543765 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 12 16:50:47.545309 kubelet[2219]: E0912 16:50:47.545148 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:47.599479 kubelet[2219]: I0912 16:50:47.599443 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:47.599479 kubelet[2219]: I0912 16:50:47.599472 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:47.599617 kubelet[2219]: I0912 16:50:47.599493 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:47.599617 kubelet[2219]: I0912 16:50:47.599508 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:47.599617 kubelet[2219]: I0912 16:50:47.599526 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 16:50:47.599617 kubelet[2219]: I0912 16:50:47.599541 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:47.599617 kubelet[2219]: I0912 16:50:47.599561 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:47.599755 kubelet[2219]: I0912 16:50:47.599577 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:47.599755 kubelet[2219]: I0912 16:50:47.599618 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:47.625292 kubelet[2219]: I0912 16:50:47.625245 2219 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 16:50:47.625574 kubelet[2219]: E0912 16:50:47.625549 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 12 16:50:47.799202 kubelet[2219]: E0912 16:50:47.799115 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Sep 12 16:50:47.830520 kubelet[2219]: E0912 16:50:47.830466 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:47.831086 containerd[1480]: time="2025-09-12T16:50:47.831044883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 16:50:47.842294 kubelet[2219]: E0912 16:50:47.842196 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:47.842621 containerd[1480]: time="2025-09-12T16:50:47.842578963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:580443e352d12f13348a4700f95c9ad2,Namespace:kube-system,Attempt:0,}" Sep 12 16:50:47.846021 kubelet[2219]: E0912 16:50:47.845996 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:47.846342 containerd[1480]: time="2025-09-12T16:50:47.846303163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 16:50:48.024093 kubelet[2219]: W0912 16:50:48.024020 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:48.024093 kubelet[2219]: E0912 16:50:48.024062 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:48.029229 kubelet[2219]: I0912 16:50:48.028945 2219 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 16:50:48.029353 kubelet[2219]: E0912 16:50:48.029320 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 12 16:50:48.273069 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount664662926.mount: Deactivated successfully. Sep 12 16:50:48.278857 containerd[1480]: time="2025-09-12T16:50:48.278806043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:50:48.281556 containerd[1480]: time="2025-09-12T16:50:48.281517363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 16:50:48.282269 containerd[1480]: time="2025-09-12T16:50:48.282215483Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:50:48.283548 containerd[1480]: time="2025-09-12T16:50:48.283502323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:50:48.285125 containerd[1480]: time="2025-09-12T16:50:48.285093163Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:50:48.286010 containerd[1480]: time="2025-09-12T16:50:48.285897003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 16:50:48.286371 containerd[1480]: time="2025-09-12T16:50:48.286301683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 443.66288ms" Sep 12 16:50:48.287061 containerd[1480]: time="2025-09-12T16:50:48.286706483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 12 16:50:48.287168 containerd[1480]: time="2025-09-12T16:50:48.287146003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 16:50:48.292227 containerd[1480]: time="2025-09-12T16:50:48.292180363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 445.8178ms" Sep 12 16:50:48.293849 containerd[1480]: time="2025-09-12T16:50:48.293811843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 462.68708ms" Sep 12 16:50:48.375274 containerd[1480]: time="2025-09-12T16:50:48.375170003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:50:48.375274 containerd[1480]: time="2025-09-12T16:50:48.375246603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:50:48.375274 containerd[1480]: time="2025-09-12T16:50:48.375264083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.375450 containerd[1480]: time="2025-09-12T16:50:48.375343843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.376529 containerd[1480]: time="2025-09-12T16:50:48.375995643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:50:48.376529 containerd[1480]: time="2025-09-12T16:50:48.376431923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:50:48.376529 containerd[1480]: time="2025-09-12T16:50:48.376463243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.377172 containerd[1480]: time="2025-09-12T16:50:48.377115443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.377710 containerd[1480]: time="2025-09-12T16:50:48.377487683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:50:48.377710 containerd[1480]: time="2025-09-12T16:50:48.377535243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:50:48.377710 containerd[1480]: time="2025-09-12T16:50:48.377545923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.377710 containerd[1480]: time="2025-09-12T16:50:48.377613003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:50:48.400863 systemd[1]: Started cri-containerd-25017a3da3f2ab3e822d1f7e08bc1fb944274a1de1584cf652b6c0807cd0a8ca.scope - libcontainer container 25017a3da3f2ab3e822d1f7e08bc1fb944274a1de1584cf652b6c0807cd0a8ca. Sep 12 16:50:48.401903 systemd[1]: Started cri-containerd-69c5ce92a1a1119ef798aca0ba243f3de2f0185ff33a66a3655ced7191cabc6c.scope - libcontainer container 69c5ce92a1a1119ef798aca0ba243f3de2f0185ff33a66a3655ced7191cabc6c. Sep 12 16:50:48.403127 systemd[1]: Started cri-containerd-934ff07880359bee75efa81e07660ed871e779813784bbbe3417cee231f56bae.scope - libcontainer container 934ff07880359bee75efa81e07660ed871e779813784bbbe3417cee231f56bae. 
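The RunPodSandbox lines above are the kubelet driving containerd through the CRI: one pause sandbox per static pod, then one systemd scope (cri-containerd-<id>.scope) per sandbox. A stripped-down sketch of the same gRPC call, issued directly against the containerd CRI socket (the default /run/containerd/containerd.sock path is assumed, since the log only shows that crio.sock is absent), might look like:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Metadata mirrors the kube-scheduler sandbox created above; a real kubelet
	// request also carries log directory, DNS and security settings.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost",
				Uid:       "72a30db4fc25e4da65a3b99eba43be94",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}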
Sep 12 16:50:48.438049 containerd[1480]: time="2025-09-12T16:50:48.438012843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"934ff07880359bee75efa81e07660ed871e779813784bbbe3417cee231f56bae\"" Sep 12 16:50:48.438834 kubelet[2219]: E0912 16:50:48.438799 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:48.439135 containerd[1480]: time="2025-09-12T16:50:48.439091563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:580443e352d12f13348a4700f95c9ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"69c5ce92a1a1119ef798aca0ba243f3de2f0185ff33a66a3655ced7191cabc6c\"" Sep 12 16:50:48.440184 kubelet[2219]: E0912 16:50:48.440058 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:48.441798 containerd[1480]: time="2025-09-12T16:50:48.441681003Z" level=info msg="CreateContainer within sandbox \"934ff07880359bee75efa81e07660ed871e779813784bbbe3417cee231f56bae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 16:50:48.442134 containerd[1480]: time="2025-09-12T16:50:48.442028643Z" level=info msg="CreateContainer within sandbox \"69c5ce92a1a1119ef798aca0ba243f3de2f0185ff33a66a3655ced7191cabc6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 16:50:48.444847 containerd[1480]: time="2025-09-12T16:50:48.444816363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"25017a3da3f2ab3e822d1f7e08bc1fb944274a1de1584cf652b6c0807cd0a8ca\"" Sep 12 16:50:48.445398 kubelet[2219]: E0912 16:50:48.445377 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:48.447042 containerd[1480]: time="2025-09-12T16:50:48.447014203Z" level=info msg="CreateContainer within sandbox \"25017a3da3f2ab3e822d1f7e08bc1fb944274a1de1584cf652b6c0807cd0a8ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 16:50:48.457921 containerd[1480]: time="2025-09-12T16:50:48.457848283Z" level=info msg="CreateContainer within sandbox \"934ff07880359bee75efa81e07660ed871e779813784bbbe3417cee231f56bae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2dc6dfc5839b97702d64336abe2d2450c98b09cc52df0d6b791c42d07af3e391\"" Sep 12 16:50:48.458586 containerd[1480]: time="2025-09-12T16:50:48.458559443Z" level=info msg="StartContainer for \"2dc6dfc5839b97702d64336abe2d2450c98b09cc52df0d6b791c42d07af3e391\"" Sep 12 16:50:48.462871 containerd[1480]: time="2025-09-12T16:50:48.462839243Z" level=info msg="CreateContainer within sandbox \"25017a3da3f2ab3e822d1f7e08bc1fb944274a1de1584cf652b6c0807cd0a8ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cd22decbb60c88daa0ddc189d08550e0b7a87085349f9167b2cb7726aceb3580\"" Sep 12 16:50:48.463387 containerd[1480]: time="2025-09-12T16:50:48.463273523Z" level=info msg="StartContainer for \"cd22decbb60c88daa0ddc189d08550e0b7a87085349f9167b2cb7726aceb3580\"" Sep 12 
16:50:48.464363 containerd[1480]: time="2025-09-12T16:50:48.464314163Z" level=info msg="CreateContainer within sandbox \"69c5ce92a1a1119ef798aca0ba243f3de2f0185ff33a66a3655ced7191cabc6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe681a05ab952702d45bb12748966fc0ef80d2cb2326a6e48cafdceeb6ff413d\"" Sep 12 16:50:48.464723 containerd[1480]: time="2025-09-12T16:50:48.464676083Z" level=info msg="StartContainer for \"fe681a05ab952702d45bb12748966fc0ef80d2cb2326a6e48cafdceeb6ff413d\"" Sep 12 16:50:48.484543 kubelet[2219]: W0912 16:50:48.484496 2219 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 12 16:50:48.484628 kubelet[2219]: E0912 16:50:48.484554 2219 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 12 16:50:48.484876 systemd[1]: Started cri-containerd-2dc6dfc5839b97702d64336abe2d2450c98b09cc52df0d6b791c42d07af3e391.scope - libcontainer container 2dc6dfc5839b97702d64336abe2d2450c98b09cc52df0d6b791c42d07af3e391. Sep 12 16:50:48.487785 systemd[1]: Started cri-containerd-cd22decbb60c88daa0ddc189d08550e0b7a87085349f9167b2cb7726aceb3580.scope - libcontainer container cd22decbb60c88daa0ddc189d08550e0b7a87085349f9167b2cb7726aceb3580. Sep 12 16:50:48.489061 systemd[1]: Started cri-containerd-fe681a05ab952702d45bb12748966fc0ef80d2cb2326a6e48cafdceeb6ff413d.scope - libcontainer container fe681a05ab952702d45bb12748966fc0ef80d2cb2326a6e48cafdceeb6ff413d. 
Sep 12 16:50:48.526653 containerd[1480]: time="2025-09-12T16:50:48.526539483Z" level=info msg="StartContainer for \"2dc6dfc5839b97702d64336abe2d2450c98b09cc52df0d6b791c42d07af3e391\" returns successfully" Sep 12 16:50:48.533259 containerd[1480]: time="2025-09-12T16:50:48.533218203Z" level=info msg="StartContainer for \"cd22decbb60c88daa0ddc189d08550e0b7a87085349f9167b2cb7726aceb3580\" returns successfully" Sep 12 16:50:48.542493 containerd[1480]: time="2025-09-12T16:50:48.542308803Z" level=info msg="StartContainer for \"fe681a05ab952702d45bb12748966fc0ef80d2cb2326a6e48cafdceeb6ff413d\" returns successfully" Sep 12 16:50:48.602886 kubelet[2219]: E0912 16:50:48.602839 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Sep 12 16:50:48.832789 kubelet[2219]: I0912 16:50:48.830847 2219 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 16:50:49.220369 kubelet[2219]: E0912 16:50:49.220213 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:49.220369 kubelet[2219]: E0912 16:50:49.220331 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:49.223569 kubelet[2219]: E0912 16:50:49.223540 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:49.223669 kubelet[2219]: E0912 16:50:49.223655 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:49.225799 kubelet[2219]: E0912 16:50:49.225781 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:49.225894 kubelet[2219]: E0912 16:50:49.225880 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:50.110255 kubelet[2219]: I0912 16:50:50.110064 2219 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 16:50:50.110255 kubelet[2219]: E0912 16:50:50.110101 2219 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 16:50:50.122311 kubelet[2219]: E0912 16:50:50.122272 2219 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:50.222377 kubelet[2219]: E0912 16:50:50.222325 2219 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:50.227668 kubelet[2219]: E0912 16:50:50.227591 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:50.227757 kubelet[2219]: E0912 16:50:50.227730 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 12 16:50:50.228264 kubelet[2219]: E0912 16:50:50.228243 2219 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 16:50:50.228586 kubelet[2219]: E0912 16:50:50.228528 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:50.322936 kubelet[2219]: E0912 16:50:50.322899 2219 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:50.398177 kubelet[2219]: I0912 16:50:50.397743 2219 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 16:50:50.404731 kubelet[2219]: E0912 16:50:50.404687 2219 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 16:50:50.404731 kubelet[2219]: I0912 16:50:50.404733 2219 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:50.406229 kubelet[2219]: E0912 16:50:50.406206 2219 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:50.406229 kubelet[2219]: I0912 16:50:50.406227 2219 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:50.407636 kubelet[2219]: E0912 16:50:50.407614 2219 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:51.185589 kubelet[2219]: I0912 16:50:51.185362 2219 apiserver.go:52] "Watching apiserver" Sep 12 16:50:51.197782 kubelet[2219]: I0912 16:50:51.197755 2219 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 16:50:51.228764 kubelet[2219]: I0912 16:50:51.228171 2219 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:51.233253 kubelet[2219]: E0912 16:50:51.233173 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:52.189714 systemd[1]: Reload requested from client PID 2501 ('systemctl') (unit session-7.scope)... Sep 12 16:50:52.189727 systemd[1]: Reloading... Sep 12 16:50:52.231225 kubelet[2219]: E0912 16:50:52.231203 2219 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:52.260809 zram_generator::config[2548]: No configuration found. Sep 12 16:50:52.335682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 16:50:52.420247 systemd[1]: Reloading finished in 230 ms. Sep 12 16:50:52.442266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 16:50:52.461983 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 16:50:52.462330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:52.462441 systemd[1]: kubelet.service: Consumed 1.765s CPU time, 128.5M memory peak. Sep 12 16:50:52.473024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 16:50:52.576374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 16:50:52.580595 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 16:50:52.625348 kubelet[2587]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:50:52.625348 kubelet[2587]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 16:50:52.625348 kubelet[2587]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 16:50:52.625684 kubelet[2587]: I0912 16:50:52.625415 2587 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 16:50:52.634085 kubelet[2587]: I0912 16:50:52.634051 2587 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 16:50:52.634085 kubelet[2587]: I0912 16:50:52.634076 2587 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 16:50:52.634333 kubelet[2587]: I0912 16:50:52.634306 2587 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 16:50:52.635506 kubelet[2587]: I0912 16:50:52.635484 2587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 16:50:52.638453 kubelet[2587]: I0912 16:50:52.638427 2587 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 16:50:52.641182 kubelet[2587]: E0912 16:50:52.641123 2587 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 16:50:52.641182 kubelet[2587]: I0912 16:50:52.641152 2587 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 16:50:52.643814 kubelet[2587]: I0912 16:50:52.643795 2587 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 16:50:52.643999 kubelet[2587]: I0912 16:50:52.643975 2587 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 16:50:52.644162 kubelet[2587]: I0912 16:50:52.643998 2587 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 16:50:52.644245 kubelet[2587]: I0912 16:50:52.644171 2587 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 16:50:52.644245 kubelet[2587]: I0912 16:50:52.644181 2587 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 16:50:52.644245 kubelet[2587]: I0912 16:50:52.644231 2587 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:50:52.644808 kubelet[2587]: I0912 16:50:52.644352 2587 kubelet.go:446] "Attempting to sync node with API server" Sep 12 16:50:52.644808 kubelet[2587]: I0912 16:50:52.644371 2587 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 16:50:52.644808 kubelet[2587]: I0912 16:50:52.644405 2587 kubelet.go:352] "Adding apiserver pod source" Sep 12 16:50:52.644808 kubelet[2587]: I0912 16:50:52.644419 2587 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 16:50:52.652320 kubelet[2587]: I0912 16:50:52.652292 2587 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 16:50:52.652998 kubelet[2587]: I0912 16:50:52.652789 2587 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 16:50:52.653855 kubelet[2587]: I0912 16:50:52.653831 2587 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 16:50:52.653966 kubelet[2587]: I0912 16:50:52.653870 2587 server.go:1287] "Started kubelet" Sep 12 16:50:52.654312 kubelet[2587]: I0912 16:50:52.654269 2587 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 16:50:52.654648 kubelet[2587]: I0912 16:50:52.654607 2587 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 16:50:52.654856 kubelet[2587]: I0912 16:50:52.654838 2587 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 16:50:52.656125 kubelet[2587]: I0912 16:50:52.656100 2587 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 16:50:52.656777 kubelet[2587]: I0912 16:50:52.656756 2587 server.go:479] "Adding debug handlers to kubelet server" Sep 12 16:50:52.658958 kubelet[2587]: I0912 16:50:52.658931 2587 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 16:50:52.659409 kubelet[2587]: E0912 16:50:52.659386 2587 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 16:50:52.659447 kubelet[2587]: I0912 16:50:52.659423 2587 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 16:50:52.659593 kubelet[2587]: I0912 16:50:52.659575 2587 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 16:50:52.659734 kubelet[2587]: I0912 16:50:52.659719 2587 reconciler.go:26] "Reconciler: start to sync state" Sep 12 16:50:52.660707 kubelet[2587]: I0912 16:50:52.660677 2587 factory.go:221] Registration of the systemd container factory successfully Sep 12 16:50:52.660830 kubelet[2587]: I0912 16:50:52.660790 2587 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 16:50:52.661290 kubelet[2587]: E0912 16:50:52.661212 2587 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 16:50:52.661643 kubelet[2587]: I0912 16:50:52.661623 2587 factory.go:221] Registration of the containerd container factory successfully Sep 12 16:50:52.668944 kubelet[2587]: I0912 16:50:52.668906 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 16:50:52.669869 kubelet[2587]: I0912 16:50:52.669764 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 16:50:52.669869 kubelet[2587]: I0912 16:50:52.669784 2587 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 16:50:52.669869 kubelet[2587]: I0912 16:50:52.669799 2587 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
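The container manager nodeConfig dump above (systemd cgroup driver, cgroup root "/", the five hard-eviction thresholds, static pods read from /etc/kubernetes/manifests) is the effective kubelet configuration after the reload. Expressed as the KubeletConfiguration API type, roughly the same settings would look like the sketch below; this is an illustration, not the actual config file on this host, and fields not visible in the log are left at their defaults:

package main

import (
	"encoding/json"
	"fmt"

	kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
)

func main() {
	// Mirrors the values visible in the nodeConfig entry above.
	cfg := kubeletconfigv1beta1.KubeletConfiguration{
		CgroupDriver:  "systemd",
		StaticPodPath: "/etc/kubernetes/manifests",
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}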
Sep 12 16:50:52.669869 kubelet[2587]: I0912 16:50:52.669806 2587 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 16:50:52.670163 kubelet[2587]: E0912 16:50:52.670111 2587 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 16:50:52.697577 kubelet[2587]: I0912 16:50:52.697487 2587 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697683 2587 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697727 2587 state_mem.go:36] "Initialized new in-memory state store" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697861 2587 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697872 2587 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697889 2587 policy_none.go:49] "None policy: Start" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697898 2587 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697907 2587 state_mem.go:35] "Initializing new in-memory state store" Sep 12 16:50:52.698573 kubelet[2587]: I0912 16:50:52.697996 2587 state_mem.go:75] "Updated machine memory state" Sep 12 16:50:52.702804 kubelet[2587]: I0912 16:50:52.702781 2587 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 16:50:52.703478 kubelet[2587]: I0912 16:50:52.702931 2587 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 16:50:52.703478 kubelet[2587]: I0912 16:50:52.702948 2587 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 16:50:52.703478 kubelet[2587]: I0912 16:50:52.703163 2587 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 16:50:52.703905 kubelet[2587]: E0912 16:50:52.703881 2587 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 16:50:52.771458 kubelet[2587]: I0912 16:50:52.771414 2587 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 16:50:52.771923 kubelet[2587]: I0912 16:50:52.771897 2587 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:52.772239 kubelet[2587]: I0912 16:50:52.771898 2587 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.777521 kubelet[2587]: E0912 16:50:52.777479 2587 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:52.807035 kubelet[2587]: I0912 16:50:52.806974 2587 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 16:50:52.813462 kubelet[2587]: I0912 16:50:52.812879 2587 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 16:50:52.813462 kubelet[2587]: I0912 16:50:52.812950 2587 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 16:50:52.860533 kubelet[2587]: I0912 16:50:52.860503 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:52.860533 kubelet[2587]: I0912 16:50:52.860535 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.860533 kubelet[2587]: I0912 16:50:52.860555 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.860533 kubelet[2587]: I0912 16:50:52.860581 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.860533 kubelet[2587]: I0912 16:50:52.860598 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.860829 kubelet[2587]: I0912 16:50:52.860618 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 
16:50:52.860829 kubelet[2587]: I0912 16:50:52.860649 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/580443e352d12f13348a4700f95c9ad2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"580443e352d12f13348a4700f95c9ad2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:52.860829 kubelet[2587]: I0912 16:50:52.860668 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 16:50:52.860829 kubelet[2587]: I0912 16:50:52.860683 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 16:50:53.075783 kubelet[2587]: E0912 16:50:53.075752 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.077755 kubelet[2587]: E0912 16:50:53.077732 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.079034 kubelet[2587]: E0912 16:50:53.079013 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.188003 sudo[2624]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 16:50:53.188292 sudo[2624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 16:50:53.617058 sudo[2624]: pam_unix(sudo:session): session closed for user root Sep 12 16:50:53.644991 kubelet[2587]: I0912 16:50:53.644885 2587 apiserver.go:52] "Watching apiserver" Sep 12 16:50:53.660440 kubelet[2587]: I0912 16:50:53.660403 2587 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 16:50:53.689638 kubelet[2587]: I0912 16:50:53.688440 2587 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:53.689638 kubelet[2587]: E0912 16:50:53.689114 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.689638 kubelet[2587]: E0912 16:50:53.689359 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.693974 kubelet[2587]: E0912 16:50:53.693938 2587 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 16:50:53.694091 kubelet[2587]: E0912 16:50:53.694071 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:53.711326 kubelet[2587]: I0912 16:50:53.711275 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.711246852 podStartE2EDuration="1.711246852s" podCreationTimestamp="2025-09-12 16:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:50:53.70876979 +0000 UTC m=+1.123965868" watchObservedRunningTime="2025-09-12 16:50:53.711246852 +0000 UTC m=+1.126442930" Sep 12 16:50:53.741787 kubelet[2587]: I0912 16:50:53.741733 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.741715647 podStartE2EDuration="2.741715647s" podCreationTimestamp="2025-09-12 16:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:50:53.740677317 +0000 UTC m=+1.155873395" watchObservedRunningTime="2025-09-12 16:50:53.741715647 +0000 UTC m=+1.156911725" Sep 12 16:50:53.741940 kubelet[2587]: I0912 16:50:53.741832 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.741828688 podStartE2EDuration="1.741828688s" podCreationTimestamp="2025-09-12 16:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:50:53.729385696 +0000 UTC m=+1.144581734" watchObservedRunningTime="2025-09-12 16:50:53.741828688 +0000 UTC m=+1.157024766" Sep 12 16:50:54.689658 kubelet[2587]: E0912 16:50:54.689619 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:54.690005 kubelet[2587]: E0912 16:50:54.689680 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:55.121932 sudo[1664]: pam_unix(sudo:session): session closed for user root Sep 12 16:50:55.122998 sshd[1663]: Connection closed by 10.0.0.1 port 52274 Sep 12 16:50:55.123524 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Sep 12 16:50:55.126625 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:52274.service: Deactivated successfully. Sep 12 16:50:55.128475 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 16:50:55.128776 systemd[1]: session-7.scope: Consumed 8.557s CPU time, 258.2M memory peak. Sep 12 16:50:55.129647 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. Sep 12 16:50:55.130591 systemd-logind[1465]: Removed session 7. Sep 12 16:50:57.294039 kubelet[2587]: E0912 16:50:57.294005 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:58.836153 kubelet[2587]: I0912 16:50:58.836096 2587 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 16:50:58.836846 containerd[1480]: time="2025-09-12T16:50:58.836810384Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
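The recurring "Nameserver limits exceeded" warnings mean the resolv.conf the kubelet passes to pods lists more than the three nameservers the resolver supports, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. A small standalone sketch of that trimming logic follows, reading an assumed /etc/resolv.conf (the log does not show which file --resolv-conf points at):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit the kubelet warning refers to

func main() {
	// Assumed resolver config path.
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying first %d of %d\n",
			maxNameservers, len(nameservers))
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
}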
Sep 12 16:50:58.837755 kubelet[2587]: I0912 16:50:58.837318 2587 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 16:50:59.250470 kubelet[2587]: E0912 16:50:59.250356 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:59.696895 kubelet[2587]: E0912 16:50:59.696866 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:50:59.791144 systemd[1]: Created slice kubepods-besteffort-pode256727e_a96a_413a_a6cc_4ee190c81839.slice - libcontainer container kubepods-besteffort-pode256727e_a96a_413a_a6cc_4ee190c81839.slice. Sep 12 16:50:59.802422 kubelet[2587]: I0912 16:50:59.802392 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e256727e-a96a-413a-a6cc-4ee190c81839-kube-proxy\") pod \"kube-proxy-7w7cs\" (UID: \"e256727e-a96a-413a-a6cc-4ee190c81839\") " pod="kube-system/kube-proxy-7w7cs" Sep 12 16:50:59.802546 kubelet[2587]: I0912 16:50:59.802530 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e256727e-a96a-413a-a6cc-4ee190c81839-xtables-lock\") pod \"kube-proxy-7w7cs\" (UID: \"e256727e-a96a-413a-a6cc-4ee190c81839\") " pod="kube-system/kube-proxy-7w7cs" Sep 12 16:50:59.802630 kubelet[2587]: I0912 16:50:59.802616 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zspb\" (UniqueName: \"kubernetes.io/projected/e256727e-a96a-413a-a6cc-4ee190c81839-kube-api-access-9zspb\") pod \"kube-proxy-7w7cs\" (UID: \"e256727e-a96a-413a-a6cc-4ee190c81839\") " pod="kube-system/kube-proxy-7w7cs" Sep 12 16:50:59.802711 kubelet[2587]: I0912 16:50:59.802683 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e256727e-a96a-413a-a6cc-4ee190c81839-lib-modules\") pod \"kube-proxy-7w7cs\" (UID: \"e256727e-a96a-413a-a6cc-4ee190c81839\") " pod="kube-system/kube-proxy-7w7cs" Sep 12 16:50:59.808407 systemd[1]: Created slice kubepods-burstable-pod8c68a7ab_25ab_4933_afc0_2c2eeaa2d9df.slice - libcontainer container kubepods-burstable-pod8c68a7ab_25ab_4933_afc0_2c2eeaa2d9df.slice. Sep 12 16:50:59.886031 systemd[1]: Created slice kubepods-besteffort-pod7be50088_be0f_413c_b07c_f4e7ab1ea22e.slice - libcontainer container kubepods-besteffort-pod7be50088_be0f_413c_b07c_f4e7ab1ea22e.slice. 
Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903047 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-kernel\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903092 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hostproc\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903125 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-bpf-maps\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903143 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-config-path\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903162 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-net\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903319 kubelet[2587]: I0912 16:50:59.903181 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cni-path\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903892 kubelet[2587]: I0912 16:50:59.903195 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hubble-tls\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903892 kubelet[2587]: I0912 16:50:59.903238 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-etc-cni-netd\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903892 kubelet[2587]: I0912 16:50:59.903267 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-cgroup\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903892 kubelet[2587]: I0912 16:50:59.903286 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpn9v\" (UniqueName: 
\"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-kube-api-access-bpn9v\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.903892 kubelet[2587]: I0912 16:50:59.903306 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xkx6\" (UniqueName: \"kubernetes.io/projected/7be50088-be0f-413c-b07c-f4e7ab1ea22e-kube-api-access-5xkx6\") pod \"cilium-operator-6c4d7847fc-snzfl\" (UID: \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\") " pod="kube-system/cilium-operator-6c4d7847fc-snzfl" Sep 12 16:50:59.904012 kubelet[2587]: I0912 16:50:59.903325 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-run\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.904012 kubelet[2587]: I0912 16:50:59.903343 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-lib-modules\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.904012 kubelet[2587]: I0912 16:50:59.903367 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-xtables-lock\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.904012 kubelet[2587]: I0912 16:50:59.903391 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-clustermesh-secrets\") pod \"cilium-nbdfw\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " pod="kube-system/cilium-nbdfw" Sep 12 16:50:59.904012 kubelet[2587]: I0912 16:50:59.903409 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be50088-be0f-413c-b07c-f4e7ab1ea22e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-snzfl\" (UID: \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\") " pod="kube-system/cilium-operator-6c4d7847fc-snzfl" Sep 12 16:51:00.102216 kubelet[2587]: E0912 16:51:00.102182 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.102763 containerd[1480]: time="2025-09-12T16:51:00.102715068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7w7cs,Uid:e256727e-a96a-413a-a6cc-4ee190c81839,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:00.112321 kubelet[2587]: E0912 16:51:00.112297 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.112949 containerd[1480]: time="2025-09-12T16:51:00.112610045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nbdfw,Uid:8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:00.124335 containerd[1480]: time="2025-09-12T16:51:00.123317986Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:00.124335 containerd[1480]: time="2025-09-12T16:51:00.123385387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:00.124335 containerd[1480]: time="2025-09-12T16:51:00.123400067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.124335 containerd[1480]: time="2025-09-12T16:51:00.123994630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.131116 containerd[1480]: time="2025-09-12T16:51:00.130996030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:00.131265 containerd[1480]: time="2025-09-12T16:51:00.131094711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:00.132885 containerd[1480]: time="2025-09-12T16:51:00.132053796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.132885 containerd[1480]: time="2025-09-12T16:51:00.132178557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.147855 systemd[1]: Started cri-containerd-ab9a9a714a42fe74b80e4e0f1aa1e85eedc70c15db1311c4362b3e955c3a1ba7.scope - libcontainer container ab9a9a714a42fe74b80e4e0f1aa1e85eedc70c15db1311c4362b3e955c3a1ba7. Sep 12 16:51:00.150245 systemd[1]: Started cri-containerd-aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180.scope - libcontainer container aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180. 
Sep 12 16:51:00.173403 containerd[1480]: time="2025-09-12T16:51:00.173035111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7w7cs,Uid:e256727e-a96a-413a-a6cc-4ee190c81839,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab9a9a714a42fe74b80e4e0f1aa1e85eedc70c15db1311c4362b3e955c3a1ba7\"" Sep 12 16:51:00.173758 containerd[1480]: time="2025-09-12T16:51:00.173731515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nbdfw,Uid:8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\"" Sep 12 16:51:00.174110 kubelet[2587]: E0912 16:51:00.173948 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.174663 kubelet[2587]: E0912 16:51:00.174625 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.175597 containerd[1480]: time="2025-09-12T16:51:00.175559526Z" level=info msg="CreateContainer within sandbox \"ab9a9a714a42fe74b80e4e0f1aa1e85eedc70c15db1311c4362b3e955c3a1ba7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 16:51:00.176208 containerd[1480]: time="2025-09-12T16:51:00.176173089Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 16:51:00.188836 kubelet[2587]: E0912 16:51:00.188812 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.189185 containerd[1480]: time="2025-09-12T16:51:00.189156324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-snzfl,Uid:7be50088-be0f-413c-b07c-f4e7ab1ea22e,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:00.200541 containerd[1480]: time="2025-09-12T16:51:00.200482588Z" level=info msg="CreateContainer within sandbox \"ab9a9a714a42fe74b80e4e0f1aa1e85eedc70c15db1311c4362b3e955c3a1ba7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f73978d0989936d871f043963d81871d3bfb25831850f681e9417be21b735833\"" Sep 12 16:51:00.201004 containerd[1480]: time="2025-09-12T16:51:00.200964111Z" level=info msg="StartContainer for \"f73978d0989936d871f043963d81871d3bfb25831850f681e9417be21b735833\"" Sep 12 16:51:00.213100 containerd[1480]: time="2025-09-12T16:51:00.213017340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:00.213100 containerd[1480]: time="2025-09-12T16:51:00.213078261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:00.213100 containerd[1480]: time="2025-09-12T16:51:00.213089781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.213262 containerd[1480]: time="2025-09-12T16:51:00.213171061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:00.224837 systemd[1]: Started cri-containerd-f73978d0989936d871f043963d81871d3bfb25831850f681e9417be21b735833.scope - libcontainer container f73978d0989936d871f043963d81871d3bfb25831850f681e9417be21b735833. Sep 12 16:51:00.227982 systemd[1]: Started cri-containerd-8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250.scope - libcontainer container 8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250. Sep 12 16:51:00.254355 containerd[1480]: time="2025-09-12T16:51:00.254319977Z" level=info msg="StartContainer for \"f73978d0989936d871f043963d81871d3bfb25831850f681e9417be21b735833\" returns successfully" Sep 12 16:51:00.264167 containerd[1480]: time="2025-09-12T16:51:00.264081393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-snzfl,Uid:7be50088-be0f-413c-b07c-f4e7ab1ea22e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\"" Sep 12 16:51:00.264877 kubelet[2587]: E0912 16:51:00.264742 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.566593 kubelet[2587]: E0912 16:51:00.566266 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.701270 kubelet[2587]: E0912 16:51:00.701235 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.701388 kubelet[2587]: E0912 16:51:00.701277 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.702147 kubelet[2587]: E0912 16:51:00.701648 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:00.710110 kubelet[2587]: I0912 16:51:00.710055 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7w7cs" podStartSLOduration=1.710042027 podStartE2EDuration="1.710042027s" podCreationTimestamp="2025-09-12 16:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:00.710016947 +0000 UTC m=+8.125213025" watchObservedRunningTime="2025-09-12 16:51:00.710042027 +0000 UTC m=+8.125238065" Sep 12 16:51:05.642762 update_engine[1468]: I20250912 16:51:05.642436 1468 update_attempter.cc:509] Updating boot flags... Sep 12 16:51:05.722862 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2964) Sep 12 16:51:05.778670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2968) Sep 12 16:51:07.301852 kubelet[2587]: E0912 16:51:07.301820 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:09.824980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556521339.mount: Deactivated successfully. 
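The "Nameserver limits exceeded" warnings that repeat throughout this log come from the kubelet's DNS configurer: the node's resolv.conf apparently lists more than three nameservers, and because the glibc resolver only honors the first three, the kubelet keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A rough Python approximation of that check; the limit of three is real, but the parsing below and the fourth nameserver (8.8.4.4) are illustrative stand-ins, not the kubelet's actual Go code or this node's actual resolv.conf:

```python
MAX_NAMESERVERS = 3  # the glibc resolver honors at most three nameserver lines

# Hypothetical resolv.conf mirroring the situation the kubelet warns about.
RESOLV_CONF = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

def applied_nameservers(resolv_conf: str) -> tuple[list[str], list[str]]:
    """Return (kept, omitted) nameservers, keeping only the first MAX_NAMESERVERS."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if line.startswith("nameserver") and len(parts := line.split()) >= 2
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

kept, omitted = applied_nameservers(RESOLV_CONF)
if omitted:
    print(f'Nameserver limits exceeded; applied nameserver line is: {" ".join(kept)}')
```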
Sep 12 16:51:13.709775 containerd[1480]: time="2025-09-12T16:51:13.709725731Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:13.710716 containerd[1480]: time="2025-09-12T16:51:13.710455853Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 16:51:13.718138 containerd[1480]: time="2025-09-12T16:51:13.718113552Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:13.719716 containerd[1480]: time="2025-09-12T16:51:13.719673316Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.543464587s" Sep 12 16:51:13.719784 containerd[1480]: time="2025-09-12T16:51:13.719718636Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 16:51:13.724799 containerd[1480]: time="2025-09-12T16:51:13.724552208Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 16:51:13.726999 containerd[1480]: time="2025-09-12T16:51:13.726906294Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 16:51:13.767147 containerd[1480]: time="2025-09-12T16:51:13.767098673Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\"" Sep 12 16:51:13.768409 containerd[1480]: time="2025-09-12T16:51:13.767653835Z" level=info msg="StartContainer for \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\"" Sep 12 16:51:13.794837 systemd[1]: Started cri-containerd-e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9.scope - libcontainer container e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9. Sep 12 16:51:13.815472 containerd[1480]: time="2025-09-12T16:51:13.815431393Z" level=info msg="StartContainer for \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\" returns successfully" Sep 12 16:51:13.826365 systemd[1]: cri-containerd-e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9.scope: Deactivated successfully. 
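The pull above reports 157,646,710 bytes read for the cilium image over 13.543464587 s, which works out to roughly 11–12 MB/s. A small sketch of that arithmetic, using only the numbers already present in the containerd lines above:

```python
# Figures taken from the containerd log lines above.
bytes_read = 157_646_710        # "active requests=0, bytes read=157646710"
pull_seconds = 13.543_464_587   # "... in 13.543464587s"

throughput_bps = bytes_read / pull_seconds
print(f"~{throughput_bps / 1e6:.1f} MB/s  (~{throughput_bps / 2**20:.1f} MiB/s)")
```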
Sep 12 16:51:14.009817 containerd[1480]: time="2025-09-12T16:51:13.999574089Z" level=info msg="shim disconnected" id=e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9 namespace=k8s.io Sep 12 16:51:14.009817 containerd[1480]: time="2025-09-12T16:51:14.009499592Z" level=warning msg="cleaning up after shim disconnected" id=e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9 namespace=k8s.io Sep 12 16:51:14.009817 containerd[1480]: time="2025-09-12T16:51:14.009511752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:51:14.730916 kubelet[2587]: E0912 16:51:14.730868 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:14.733418 containerd[1480]: time="2025-09-12T16:51:14.733384032Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 16:51:14.745944 containerd[1480]: time="2025-09-12T16:51:14.745900021Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\"" Sep 12 16:51:14.746406 containerd[1480]: time="2025-09-12T16:51:14.746364142Z" level=info msg="StartContainer for \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\"" Sep 12 16:51:14.755007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9-rootfs.mount: Deactivated successfully. Sep 12 16:51:14.775847 systemd[1]: Started cri-containerd-38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476.scope - libcontainer container 38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476. Sep 12 16:51:14.795951 containerd[1480]: time="2025-09-12T16:51:14.795894097Z" level=info msg="StartContainer for \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\" returns successfully" Sep 12 16:51:14.807593 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 16:51:14.807960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 16:51:14.808498 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:51:14.818018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 16:51:14.819962 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 16:51:14.820568 systemd[1]: cri-containerd-38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476.scope: Deactivated successfully. Sep 12 16:51:14.829048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 16:51:14.852305 containerd[1480]: time="2025-09-12T16:51:14.852235148Z" level=info msg="shim disconnected" id=38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476 namespace=k8s.io Sep 12 16:51:14.852305 containerd[1480]: time="2025-09-12T16:51:14.852286188Z" level=warning msg="cleaning up after shim disconnected" id=38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476 namespace=k8s.io Sep 12 16:51:14.852569 containerd[1480]: time="2025-09-12T16:51:14.852294228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:51:15.502174 containerd[1480]: time="2025-09-12T16:51:15.502127223Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:15.502652 containerd[1480]: time="2025-09-12T16:51:15.502575704Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 16:51:15.503667 containerd[1480]: time="2025-09-12T16:51:15.503641227Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 16:51:15.509305 containerd[1480]: time="2025-09-12T16:51:15.509171079Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.784584831s" Sep 12 16:51:15.509305 containerd[1480]: time="2025-09-12T16:51:15.509215079Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 16:51:15.510929 containerd[1480]: time="2025-09-12T16:51:15.510901323Z" level=info msg="CreateContainer within sandbox \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 16:51:15.529932 containerd[1480]: time="2025-09-12T16:51:15.529820524Z" level=info msg="CreateContainer within sandbox \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\"" Sep 12 16:51:15.530616 containerd[1480]: time="2025-09-12T16:51:15.530433205Z" level=info msg="StartContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\"" Sep 12 16:51:15.557875 systemd[1]: Started cri-containerd-381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8.scope - libcontainer container 381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8. 
Sep 12 16:51:15.580118 containerd[1480]: time="2025-09-12T16:51:15.580079593Z" level=info msg="StartContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" returns successfully" Sep 12 16:51:15.735801 kubelet[2587]: E0912 16:51:15.735768 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:15.738837 kubelet[2587]: E0912 16:51:15.738804 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:15.741580 containerd[1480]: time="2025-09-12T16:51:15.741504584Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 16:51:15.748449 kubelet[2587]: I0912 16:51:15.748382 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-snzfl" podStartSLOduration=1.5047187640000002 podStartE2EDuration="16.748366359s" podCreationTimestamp="2025-09-12 16:50:59 +0000 UTC" firstStartedPulling="2025-09-12 16:51:00.266149165 +0000 UTC m=+7.681345243" lastFinishedPulling="2025-09-12 16:51:15.5097968 +0000 UTC m=+22.924992838" observedRunningTime="2025-09-12 16:51:15.748238239 +0000 UTC m=+23.163434357" watchObservedRunningTime="2025-09-12 16:51:15.748366359 +0000 UTC m=+23.163562477" Sep 12 16:51:15.758159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476-rootfs.mount: Deactivated successfully. Sep 12 16:51:15.768407 containerd[1480]: time="2025-09-12T16:51:15.768363243Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\"" Sep 12 16:51:15.768965 containerd[1480]: time="2025-09-12T16:51:15.768939644Z" level=info msg="StartContainer for \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\"" Sep 12 16:51:15.802869 systemd[1]: Started cri-containerd-d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67.scope - libcontainer container d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67. Sep 12 16:51:15.831555 containerd[1480]: time="2025-09-12T16:51:15.831490900Z" level=info msg="StartContainer for \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\" returns successfully" Sep 12 16:51:15.834122 systemd[1]: cri-containerd-d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67.scope: Deactivated successfully. 
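The pod_startup_latency_tracker entry above for cilium-operator-6c4d7847fc-snzfl also encodes a simple relationship: the reported podStartSLOduration is the end-to-end startup time minus the time spent pulling images, consistent with the startup SLI excluding image pulls. A quick check in Python using the monotonic m=+ offsets copied from that entry; the interpretation is stated as how the numbers line up here, not as a definition from the source:

```python
# Values copied from the "Observed pod startup duration" entry for
# kube-system/cilium-operator-6c4d7847fc-snzfl (monotonic m=+ offsets).
pod_start_e2e = 16.748366359          # podStartE2EDuration
first_started_pulling = 7.681345243   # firstStartedPulling m=+...
last_finished_pulling = 22.924992838  # lastFinishedPulling m=+...

pull_time = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - pull_time

print(f"image pull time : {pull_time:.9f}s")
print(f"derived SLO     : {slo_duration:.9f}s")  # matches podStartSLOduration=1.504718764
```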
Sep 12 16:51:15.917610 containerd[1480]: time="2025-09-12T16:51:15.917504847Z" level=info msg="shim disconnected" id=d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67 namespace=k8s.io Sep 12 16:51:15.917991 containerd[1480]: time="2025-09-12T16:51:15.917675128Z" level=warning msg="cleaning up after shim disconnected" id=d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67 namespace=k8s.io Sep 12 16:51:15.917991 containerd[1480]: time="2025-09-12T16:51:15.917687008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:51:16.745809 kubelet[2587]: E0912 16:51:16.745776 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:16.747203 kubelet[2587]: E0912 16:51:16.745914 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:16.748842 containerd[1480]: time="2025-09-12T16:51:16.748468514Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 16:51:16.756119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67-rootfs.mount: Deactivated successfully. Sep 12 16:51:16.766962 containerd[1480]: time="2025-09-12T16:51:16.766890991Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\"" Sep 12 16:51:16.768660 containerd[1480]: time="2025-09-12T16:51:16.768631475Z" level=info msg="StartContainer for \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\"" Sep 12 16:51:16.792849 systemd[1]: Started cri-containerd-91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565.scope - libcontainer container 91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565. Sep 12 16:51:16.811115 systemd[1]: cri-containerd-91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565.scope: Deactivated successfully. 
Sep 12 16:51:16.814237 containerd[1480]: time="2025-09-12T16:51:16.814118127Z" level=info msg="StartContainer for \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\" returns successfully" Sep 12 16:51:16.834920 containerd[1480]: time="2025-09-12T16:51:16.834868450Z" level=info msg="shim disconnected" id=91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565 namespace=k8s.io Sep 12 16:51:16.835250 containerd[1480]: time="2025-09-12T16:51:16.835089010Z" level=warning msg="cleaning up after shim disconnected" id=91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565 namespace=k8s.io Sep 12 16:51:16.835250 containerd[1480]: time="2025-09-12T16:51:16.835103730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:51:17.756072 kubelet[2587]: E0912 16:51:17.754330 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:17.755035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565-rootfs.mount: Deactivated successfully. Sep 12 16:51:17.758648 containerd[1480]: time="2025-09-12T16:51:17.758488438Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 16:51:17.783528 containerd[1480]: time="2025-09-12T16:51:17.783471526Z" level=info msg="CreateContainer within sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\"" Sep 12 16:51:17.785810 containerd[1480]: time="2025-09-12T16:51:17.784908609Z" level=info msg="StartContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\"" Sep 12 16:51:17.809624 systemd[1]: run-containerd-runc-k8s.io-3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798-runc.9UxhmZ.mount: Deactivated successfully. Sep 12 16:51:17.818882 systemd[1]: Started cri-containerd-3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798.scope - libcontainer container 3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798. Sep 12 16:51:17.847975 containerd[1480]: time="2025-09-12T16:51:17.847914089Z" level=info msg="StartContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" returns successfully" Sep 12 16:51:17.994320 kubelet[2587]: I0912 16:51:17.994292 2587 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 16:51:18.022039 systemd[1]: Created slice kubepods-burstable-pod4ed1ef38_8cef_410b_8cdd_7e95f71194ed.slice - libcontainer container kubepods-burstable-pod4ed1ef38_8cef_410b_8cdd_7e95f71194ed.slice. 
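The preceding stretch of containerd messages shows the cilium pod's containers being created and run one after another inside the same sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally cilium-agent. A small sketch that recovers that order from the "CreateContainer" messages; the sample lines are trimmed copies of entries above and the regex is illustrative:

```python
import re

# Trimmed containerd messages from the log above, in journal order.
LINES = [
    'msg="CreateContainer within sandbox \\"aab4597d...\\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"aab4597d...\\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"aab4597d...\\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"aab4597d...\\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"aab4597d...\\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"',
]

NAME = re.compile(r"ContainerMetadata\{Name:(?P<name>[^,]+),")

sequence = [NAME.search(line).group("name") for line in LINES]
print(" -> ".join(sequence))
# mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent
```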
Sep 12 16:51:18.025440 kubelet[2587]: I0912 16:51:18.025414 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6pd\" (UniqueName: \"kubernetes.io/projected/d090601b-5e4b-4361-886d-1a390ef1905c-kube-api-access-mh6pd\") pod \"coredns-668d6bf9bc-c68qr\" (UID: \"d090601b-5e4b-4361-886d-1a390ef1905c\") " pod="kube-system/coredns-668d6bf9bc-c68qr" Sep 12 16:51:18.025631 kubelet[2587]: I0912 16:51:18.025610 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxkgq\" (UniqueName: \"kubernetes.io/projected/4ed1ef38-8cef-410b-8cdd-7e95f71194ed-kube-api-access-fxkgq\") pod \"coredns-668d6bf9bc-vz5mn\" (UID: \"4ed1ef38-8cef-410b-8cdd-7e95f71194ed\") " pod="kube-system/coredns-668d6bf9bc-vz5mn" Sep 12 16:51:18.025727 kubelet[2587]: I0912 16:51:18.025712 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed1ef38-8cef-410b-8cdd-7e95f71194ed-config-volume\") pod \"coredns-668d6bf9bc-vz5mn\" (UID: \"4ed1ef38-8cef-410b-8cdd-7e95f71194ed\") " pod="kube-system/coredns-668d6bf9bc-vz5mn" Sep 12 16:51:18.025894 kubelet[2587]: I0912 16:51:18.025877 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d090601b-5e4b-4361-886d-1a390ef1905c-config-volume\") pod \"coredns-668d6bf9bc-c68qr\" (UID: \"d090601b-5e4b-4361-886d-1a390ef1905c\") " pod="kube-system/coredns-668d6bf9bc-c68qr" Sep 12 16:51:18.030408 systemd[1]: Created slice kubepods-burstable-podd090601b_5e4b_4361_886d_1a390ef1905c.slice - libcontainer container kubepods-burstable-podd090601b_5e4b_4361_886d_1a390ef1905c.slice. Sep 12 16:51:18.328061 kubelet[2587]: E0912 16:51:18.327938 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:18.328927 containerd[1480]: time="2025-09-12T16:51:18.328883050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vz5mn,Uid:4ed1ef38-8cef-410b-8cdd-7e95f71194ed,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:18.333582 kubelet[2587]: E0912 16:51:18.333547 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:18.334975 containerd[1480]: time="2025-09-12T16:51:18.334575580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c68qr,Uid:d090601b-5e4b-4361-886d-1a390ef1905c,Namespace:kube-system,Attempt:0,}" Sep 12 16:51:18.559187 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Sep 12 16:51:18.601969 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:18.603079 sshd-session[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:18.607362 systemd-logind[1465]: New session 8 of user core. Sep 12 16:51:18.618832 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 16:51:18.746073 sshd[3443]: Connection closed by 10.0.0.1 port 38822 Sep 12 16:51:18.746395 sshd-session[3441]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:18.749646 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:38822.service: Deactivated successfully. Sep 12 16:51:18.751339 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 16:51:18.751916 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. Sep 12 16:51:18.753406 systemd-logind[1465]: Removed session 8. Sep 12 16:51:18.758656 kubelet[2587]: E0912 16:51:18.758596 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:18.776825 kubelet[2587]: I0912 16:51:18.776776 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nbdfw" podStartSLOduration=6.229619455 podStartE2EDuration="19.776759733s" podCreationTimestamp="2025-09-12 16:50:59 +0000 UTC" firstStartedPulling="2025-09-12 16:51:00.175506325 +0000 UTC m=+7.590702403" lastFinishedPulling="2025-09-12 16:51:13.722646603 +0000 UTC m=+21.137842681" observedRunningTime="2025-09-12 16:51:18.776384932 +0000 UTC m=+26.191581090" watchObservedRunningTime="2025-09-12 16:51:18.776759733 +0000 UTC m=+26.191955771" Sep 12 16:51:19.766661 kubelet[2587]: E0912 16:51:19.766558 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:19.865854 systemd-networkd[1398]: cilium_host: Link UP Sep 12 16:51:19.865968 systemd-networkd[1398]: cilium_net: Link UP Sep 12 16:51:19.866078 systemd-networkd[1398]: cilium_net: Gained carrier Sep 12 16:51:19.866197 systemd-networkd[1398]: cilium_host: Gained carrier Sep 12 16:51:19.936382 systemd-networkd[1398]: cilium_vxlan: Link UP Sep 12 16:51:19.936392 systemd-networkd[1398]: cilium_vxlan: Gained carrier Sep 12 16:51:20.196723 kernel: NET: Registered PF_ALG protocol family Sep 12 16:51:20.572874 systemd-networkd[1398]: cilium_net: Gained IPv6LL Sep 12 16:51:20.701518 systemd-networkd[1398]: cilium_host: Gained IPv6LL Sep 12 16:51:20.751140 systemd-networkd[1398]: lxc_health: Link UP Sep 12 16:51:20.752114 systemd-networkd[1398]: lxc_health: Gained carrier Sep 12 16:51:20.768414 kubelet[2587]: E0912 16:51:20.768384 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:20.912878 kernel: eth0: renamed from tmpee082 Sep 12 16:51:20.912319 systemd-networkd[1398]: lxcfe7370f48f4d: Link UP Sep 12 16:51:20.917897 systemd-networkd[1398]: lxcfe7370f48f4d: Gained carrier Sep 12 16:51:20.935525 systemd-networkd[1398]: lxc6425352ca760: Link UP Sep 12 16:51:20.938719 kernel: eth0: renamed from tmpddef0 Sep 12 16:51:20.944580 systemd-networkd[1398]: lxc6425352ca760: Gained carrier Sep 12 16:51:21.213439 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Sep 12 16:51:21.770250 kubelet[2587]: E0912 16:51:21.770223 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:22.045860 systemd-networkd[1398]: lxcfe7370f48f4d: Gained IPv6LL Sep 12 16:51:22.428846 systemd-networkd[1398]: lxc6425352ca760: Gained IPv6LL Sep 12 16:51:22.620927 systemd-networkd[1398]: 
lxc_health: Gained IPv6LL Sep 12 16:51:22.773535 kubelet[2587]: E0912 16:51:22.773502 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:23.759246 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114). Sep 12 16:51:23.775852 kubelet[2587]: E0912 16:51:23.775390 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:23.806382 sshd[3838]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:23.807254 sshd-session[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:23.812558 systemd-logind[1465]: New session 9 of user core. Sep 12 16:51:23.823876 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 16:51:23.949092 sshd[3840]: Connection closed by 10.0.0.1 port 41114 Sep 12 16:51:23.949439 sshd-session[3838]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:23.952269 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:41114.service: Deactivated successfully. Sep 12 16:51:23.954172 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 16:51:23.956092 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. Sep 12 16:51:23.957308 systemd-logind[1465]: Removed session 9. Sep 12 16:51:24.442500 containerd[1480]: time="2025-09-12T16:51:24.441817408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:24.442500 containerd[1480]: time="2025-09-12T16:51:24.441873928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:24.442500 containerd[1480]: time="2025-09-12T16:51:24.441888088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:24.442500 containerd[1480]: time="2025-09-12T16:51:24.442378569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:24.447792 containerd[1480]: time="2025-09-12T16:51:24.447516935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:51:24.447792 containerd[1480]: time="2025-09-12T16:51:24.447575935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:51:24.447792 containerd[1480]: time="2025-09-12T16:51:24.447590295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:24.448161 containerd[1480]: time="2025-09-12T16:51:24.448058016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:51:24.463851 systemd[1]: Started cri-containerd-ee0820b6b316b9b0505dee1d1b727314dcd7b3cd184888f97d4301edc011f4f2.scope - libcontainer container ee0820b6b316b9b0505dee1d1b727314dcd7b3cd184888f97d4301edc011f4f2. 
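The systemd-networkd messages above trace the Cilium datapath coming up: cilium_host and cilium_net first, then the cilium_vxlan overlay, then the per-pod lxc_health, lxcfe7370f48f4d and lxc6425352ca760 interfaces, each gaining carrier and an IPv6 link-local address. A sketch that reconstructs the bring-up order from the "Link UP" events; the sample below contains trimmed timestamps and interface names copied from those entries:

```python
import re

# Trimmed systemd-networkd entries from the log above, in journal order.
EVENTS = """\
16:51:19.865854 systemd-networkd[1398]: cilium_host: Link UP
16:51:19.865968 systemd-networkd[1398]: cilium_net: Link UP
16:51:19.936382 systemd-networkd[1398]: cilium_vxlan: Link UP
16:51:20.751140 systemd-networkd[1398]: lxc_health: Link UP
16:51:20.912319 systemd-networkd[1398]: lxcfe7370f48f4d: Link UP
16:51:20.935525 systemd-networkd[1398]: lxc6425352ca760: Link UP
"""

LINK_UP = re.compile(r"systemd-networkd\[\d+\]: (?P<iface>\S+): Link UP")

order = [m.group("iface") for m in LINK_UP.finditer(EVENTS)]
print(" -> ".join(order))
```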
Sep 12 16:51:24.470057 systemd[1]: Started cri-containerd-ddef0138c284056471fc54861f947f9b393f1e769e666dc67e10a4f831dcf0ce.scope - libcontainer container ddef0138c284056471fc54861f947f9b393f1e769e666dc67e10a4f831dcf0ce. Sep 12 16:51:24.480575 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 16:51:24.481491 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 16:51:24.499991 containerd[1480]: time="2025-09-12T16:51:24.499895279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c68qr,Uid:d090601b-5e4b-4361-886d-1a390ef1905c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddef0138c284056471fc54861f947f9b393f1e769e666dc67e10a4f831dcf0ce\"" Sep 12 16:51:24.500853 kubelet[2587]: E0912 16:51:24.500831 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:24.505919 containerd[1480]: time="2025-09-12T16:51:24.505879966Z" level=info msg="CreateContainer within sandbox \"ddef0138c284056471fc54861f947f9b393f1e769e666dc67e10a4f831dcf0ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 16:51:24.506765 containerd[1480]: time="2025-09-12T16:51:24.506737167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vz5mn,Uid:4ed1ef38-8cef-410b-8cdd-7e95f71194ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0820b6b316b9b0505dee1d1b727314dcd7b3cd184888f97d4301edc011f4f2\"" Sep 12 16:51:24.510456 kubelet[2587]: E0912 16:51:24.510298 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:24.512293 containerd[1480]: time="2025-09-12T16:51:24.512119774Z" level=info msg="CreateContainer within sandbox \"ee0820b6b316b9b0505dee1d1b727314dcd7b3cd184888f97d4301edc011f4f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 16:51:24.521334 containerd[1480]: time="2025-09-12T16:51:24.521291905Z" level=info msg="CreateContainer within sandbox \"ddef0138c284056471fc54861f947f9b393f1e769e666dc67e10a4f831dcf0ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"792c2b7c684f1ee1cbc8cc6b315f76a2d6f46618adbc4ca76fb4c30a7a0d0d4d\"" Sep 12 16:51:24.521841 containerd[1480]: time="2025-09-12T16:51:24.521812946Z" level=info msg="StartContainer for \"792c2b7c684f1ee1cbc8cc6b315f76a2d6f46618adbc4ca76fb4c30a7a0d0d4d\"" Sep 12 16:51:24.526952 containerd[1480]: time="2025-09-12T16:51:24.526833152Z" level=info msg="CreateContainer within sandbox \"ee0820b6b316b9b0505dee1d1b727314dcd7b3cd184888f97d4301edc011f4f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3b1d78f7881d2ee4e37374a88c7e74484c364f3eb260e1858870645670c7d01\"" Sep 12 16:51:24.527584 containerd[1480]: time="2025-09-12T16:51:24.527559633Z" level=info msg="StartContainer for \"b3b1d78f7881d2ee4e37374a88c7e74484c364f3eb260e1858870645670c7d01\"" Sep 12 16:51:24.548858 systemd[1]: Started cri-containerd-792c2b7c684f1ee1cbc8cc6b315f76a2d6f46618adbc4ca76fb4c30a7a0d0d4d.scope - libcontainer container 792c2b7c684f1ee1cbc8cc6b315f76a2d6f46618adbc4ca76fb4c30a7a0d0d4d. 
Sep 12 16:51:24.551557 systemd[1]: Started cri-containerd-b3b1d78f7881d2ee4e37374a88c7e74484c364f3eb260e1858870645670c7d01.scope - libcontainer container b3b1d78f7881d2ee4e37374a88c7e74484c364f3eb260e1858870645670c7d01. Sep 12 16:51:24.577895 containerd[1480]: time="2025-09-12T16:51:24.577843214Z" level=info msg="StartContainer for \"792c2b7c684f1ee1cbc8cc6b315f76a2d6f46618adbc4ca76fb4c30a7a0d0d4d\" returns successfully" Sep 12 16:51:24.578018 containerd[1480]: time="2025-09-12T16:51:24.577871494Z" level=info msg="StartContainer for \"b3b1d78f7881d2ee4e37374a88c7e74484c364f3eb260e1858870645670c7d01\" returns successfully" Sep 12 16:51:24.778336 kubelet[2587]: E0912 16:51:24.778239 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:24.782372 kubelet[2587]: E0912 16:51:24.782062 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:24.789088 kubelet[2587]: I0912 16:51:24.788834 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vz5mn" podStartSLOduration=25.788821191 podStartE2EDuration="25.788821191s" podCreationTimestamp="2025-09-12 16:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:24.78844979 +0000 UTC m=+32.203645908" watchObservedRunningTime="2025-09-12 16:51:24.788821191 +0000 UTC m=+32.204017269" Sep 12 16:51:24.809933 kubelet[2587]: I0912 16:51:24.809869 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c68qr" podStartSLOduration=25.809854176 podStartE2EDuration="25.809854176s" podCreationTimestamp="2025-09-12 16:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:51:24.809515456 +0000 UTC m=+32.224711534" watchObservedRunningTime="2025-09-12 16:51:24.809854176 +0000 UTC m=+32.225050214" Sep 12 16:51:25.446736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959384423.mount: Deactivated successfully. Sep 12 16:51:25.783465 kubelet[2587]: E0912 16:51:25.783225 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:25.783465 kubelet[2587]: E0912 16:51:25.783392 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:26.785029 kubelet[2587]: E0912 16:51:26.784991 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:26.785400 kubelet[2587]: E0912 16:51:26.785061 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:51:28.963998 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:41118.service - OpenSSH per-connection server daemon (10.0.0.1:41118). 
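Both coredns "Observed pod startup duration" entries above report firstStartedPulling and lastFinishedPulling as 0001-01-01 00:00:00 +0000 UTC, which is Go's zero time and appears to be how the tracker represents pods whose images never needed to be pulled; their SLO duration accordingly equals the full E2E duration (25.788 s and 25.809 s). A small sketch of detecting that case when post-processing such lines; the field values are copied from the coredns-668d6bf9bc-vz5mn entry and the dictionary form is just an illustrative stand-in for a parsed record:

```python
GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

# Fields copied from the coredns-668d6bf9bc-vz5mn startup entry above.
entry = {
    "podStartSLOduration": 25.788821191,
    "podStartE2EDuration": "25.788821191s",
    "firstStartedPulling": "0001-01-01 00:00:00 +0000 UTC",
    "lastFinishedPulling": "0001-01-01 00:00:00 +0000 UTC",
}

pulled = entry["firstStartedPulling"] != GO_ZERO_TIME
print("image pull observed" if pulled else "no image pull observed; SLO == E2E duration")
```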
Sep 12 16:51:29.008068 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 41118 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:29.009385 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:29.013104 systemd-logind[1465]: New session 10 of user core. Sep 12 16:51:29.020910 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 16:51:29.130210 sshd[4025]: Connection closed by 10.0.0.1 port 41118 Sep 12 16:51:29.130686 sshd-session[4023]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:29.133682 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:41118.service: Deactivated successfully. Sep 12 16:51:29.136095 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 16:51:29.136745 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. Sep 12 16:51:29.137418 systemd-logind[1465]: Removed session 10. Sep 12 16:51:34.145658 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:55402.service - OpenSSH per-connection server daemon (10.0.0.1:55402). Sep 12 16:51:34.187433 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 55402 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:34.188612 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:34.192730 systemd-logind[1465]: New session 11 of user core. Sep 12 16:51:34.203874 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 16:51:34.313256 sshd[4047]: Connection closed by 10.0.0.1 port 55402 Sep 12 16:51:34.313830 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:34.325845 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:55402.service: Deactivated successfully. Sep 12 16:51:34.327433 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 16:51:34.328732 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. Sep 12 16:51:34.335928 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:55414.service - OpenSSH per-connection server daemon (10.0.0.1:55414). Sep 12 16:51:34.337546 systemd-logind[1465]: Removed session 11. Sep 12 16:51:34.376238 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 55414 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:34.377426 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:34.383490 systemd-logind[1465]: New session 12 of user core. Sep 12 16:51:34.392853 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 16:51:34.538968 sshd[4064]: Connection closed by 10.0.0.1 port 55414 Sep 12 16:51:34.539858 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:34.550835 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:55414.service: Deactivated successfully. Sep 12 16:51:34.554176 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 16:51:34.555683 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit. Sep 12 16:51:34.557965 systemd-logind[1465]: Removed session 12. Sep 12 16:51:34.573758 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:55416.service - OpenSSH per-connection server daemon (10.0.0.1:55416). 
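From here on the log is largely the same SSH pattern repeated: sshd accepts a publickey for core, pam_unix opens the session, systemd-logind registers session N, and shortly afterwards the connection closes and the session scope is deactivated and removed. A sketch that pairs those open/close events and reports how long each session lasted; the timestamps are trimmed copies of the logind entries for sessions 11 and 12 above, and the parsing is illustrative:

```python
import re
from datetime import datetime

# Trimmed systemd-logind entries from the log above (same-day timestamps).
LOG = """\
16:51:34.192730 systemd-logind[1465]: New session 11 of user core.
16:51:34.337546 systemd-logind[1465]: Removed session 11.
16:51:34.383490 systemd-logind[1465]: New session 12 of user core.
16:51:34.557965 systemd-logind[1465]: Removed session 12.
"""

EVENT = re.compile(
    r"(?P<ts>\d{2}:\d{2}:\d{2}\.\d+) .*?(?P<kind>New|Removed) session (?P<sid>\d+)"
)

opened: dict[str, datetime] = {}
for m in EVENT.finditer(LOG):
    ts = datetime.strptime(m.group("ts"), "%H:%M:%S.%f")
    if m.group("kind") == "New":
        opened[m.group("sid")] = ts
    elif m.group("sid") in opened:
        duration = (ts - opened.pop(m.group("sid"))).total_seconds()
        print(f"session {m.group('sid')}: {duration:.3f}s")
```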
Sep 12 16:51:34.619642 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 55416 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:34.620878 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:34.624592 systemd-logind[1465]: New session 13 of user core. Sep 12 16:51:34.631829 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 16:51:34.745532 sshd[4077]: Connection closed by 10.0.0.1 port 55416 Sep 12 16:51:34.746223 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:34.749435 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:55416.service: Deactivated successfully. Sep 12 16:51:34.751173 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 16:51:34.751742 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. Sep 12 16:51:34.752412 systemd-logind[1465]: Removed session 13. Sep 12 16:51:39.759066 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:55424.service - OpenSSH per-connection server daemon (10.0.0.1:55424). Sep 12 16:51:39.798958 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 55424 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:39.800363 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:39.804751 systemd-logind[1465]: New session 14 of user core. Sep 12 16:51:39.816882 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 16:51:39.923506 sshd[4093]: Connection closed by 10.0.0.1 port 55424 Sep 12 16:51:39.924097 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:39.928020 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:55424.service: Deactivated successfully. Sep 12 16:51:39.930855 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 16:51:39.931966 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. Sep 12 16:51:39.932962 systemd-logind[1465]: Removed session 14. Sep 12 16:51:44.935610 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:55446.service - OpenSSH per-connection server daemon (10.0.0.1:55446). Sep 12 16:51:44.975312 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:44.976352 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:44.980651 systemd-logind[1465]: New session 15 of user core. Sep 12 16:51:44.994841 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 16:51:45.103517 sshd[4108]: Connection closed by 10.0.0.1 port 55446 Sep 12 16:51:45.103301 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:45.118850 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:55446.service: Deactivated successfully. Sep 12 16:51:45.120387 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 16:51:45.121052 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. Sep 12 16:51:45.123757 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:55458.service - OpenSSH per-connection server daemon (10.0.0.1:55458). Sep 12 16:51:45.125075 systemd-logind[1465]: Removed session 15. 
Sep 12 16:51:45.163003 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 55458 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:45.164148 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:45.168634 systemd-logind[1465]: New session 16 of user core. Sep 12 16:51:45.176847 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 16:51:45.346792 sshd[4124]: Connection closed by 10.0.0.1 port 55458 Sep 12 16:51:45.347354 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:45.359783 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:55458.service: Deactivated successfully. Sep 12 16:51:45.361396 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 16:51:45.362034 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. Sep 12 16:51:45.364135 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:55466.service - OpenSSH per-connection server daemon (10.0.0.1:55466). Sep 12 16:51:45.364903 systemd-logind[1465]: Removed session 16. Sep 12 16:51:45.411069 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 55466 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:45.412310 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:45.416405 systemd-logind[1465]: New session 17 of user core. Sep 12 16:51:45.435836 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 16:51:45.996545 sshd[4138]: Connection closed by 10.0.0.1 port 55466 Sep 12 16:51:45.997664 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:46.010921 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:55480.service - OpenSSH per-connection server daemon (10.0.0.1:55480). Sep 12 16:51:46.011322 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:55466.service: Deactivated successfully. Sep 12 16:51:46.017518 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 16:51:46.020961 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit. Sep 12 16:51:46.024581 systemd-logind[1465]: Removed session 17. Sep 12 16:51:46.051948 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 55480 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:46.053248 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:46.057617 systemd-logind[1465]: New session 18 of user core. Sep 12 16:51:46.066832 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 16:51:46.279315 sshd[4161]: Connection closed by 10.0.0.1 port 55480 Sep 12 16:51:46.279728 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:46.291575 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:55480.service: Deactivated successfully. Sep 12 16:51:46.293466 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 16:51:46.296623 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit. Sep 12 16:51:46.313962 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492). Sep 12 16:51:46.315237 systemd-logind[1465]: Removed session 18. 
Sep 12 16:51:46.349777 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:46.351104 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:46.355168 systemd-logind[1465]: New session 19 of user core. Sep 12 16:51:46.364893 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 16:51:46.471313 sshd[4176]: Connection closed by 10.0.0.1 port 55492 Sep 12 16:51:46.471665 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:46.474836 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. Sep 12 16:51:46.475148 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:55492.service: Deactivated successfully. Sep 12 16:51:46.476920 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 16:51:46.478877 systemd-logind[1465]: Removed session 19. Sep 12 16:51:51.483007 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:60880.service - OpenSSH per-connection server daemon (10.0.0.1:60880). Sep 12 16:51:51.522205 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 60880 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:51.523281 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:51.526806 systemd-logind[1465]: New session 20 of user core. Sep 12 16:51:51.536822 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 16:51:51.642296 sshd[4194]: Connection closed by 10.0.0.1 port 60880 Sep 12 16:51:51.642622 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:51.646382 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:60880.service: Deactivated successfully. Sep 12 16:51:51.650046 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 16:51:51.650652 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit. Sep 12 16:51:51.651399 systemd-logind[1465]: Removed session 20. Sep 12 16:51:56.669989 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:60882.service - OpenSSH per-connection server daemon (10.0.0.1:60882). Sep 12 16:51:56.706056 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 60882 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:51:56.707171 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:51:56.710556 systemd-logind[1465]: New session 21 of user core. Sep 12 16:51:56.720899 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 16:51:56.826745 sshd[4216]: Connection closed by 10.0.0.1 port 60882 Sep 12 16:51:56.826750 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Sep 12 16:51:56.829942 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:60882.service: Deactivated successfully. Sep 12 16:51:56.831825 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 16:51:56.833571 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. Sep 12 16:51:56.834777 systemd-logind[1465]: Removed session 21. Sep 12 16:52:01.839264 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:46374.service - OpenSSH per-connection server daemon (10.0.0.1:46374). 
Sep 12 16:52:01.879310 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:52:01.880511 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:01.884310 systemd-logind[1465]: New session 22 of user core. Sep 12 16:52:01.896827 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 16:52:02.003587 sshd[4234]: Connection closed by 10.0.0.1 port 46374 Sep 12 16:52:02.004890 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:02.013876 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:46374.service: Deactivated successfully. Sep 12 16:52:02.015449 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 16:52:02.016719 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. Sep 12 16:52:02.022932 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378). Sep 12 16:52:02.023741 systemd-logind[1465]: Removed session 22. Sep 12 16:52:02.059524 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:52:02.060629 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:02.064468 systemd-logind[1465]: New session 23 of user core. Sep 12 16:52:02.074829 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 16:52:03.836313 containerd[1480]: time="2025-09-12T16:52:03.836270163Z" level=info msg="StopContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" with timeout 30 (s)" Sep 12 16:52:03.836939 containerd[1480]: time="2025-09-12T16:52:03.836913690Z" level=info msg="Stop container \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" with signal terminated" Sep 12 16:52:03.848137 systemd[1]: cri-containerd-381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8.scope: Deactivated successfully. Sep 12 16:52:03.872593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8-rootfs.mount: Deactivated successfully. 
Sep 12 16:52:03.876995 containerd[1480]: time="2025-09-12T16:52:03.876962652Z" level=info msg="StopContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" with timeout 2 (s)" Sep 12 16:52:03.877444 containerd[1480]: time="2025-09-12T16:52:03.877417897Z" level=info msg="Stop container \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" with signal terminated" Sep 12 16:52:03.878192 containerd[1480]: time="2025-09-12T16:52:03.878139024Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 16:52:03.880144 containerd[1480]: time="2025-09-12T16:52:03.880095324Z" level=info msg="shim disconnected" id=381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8 namespace=k8s.io Sep 12 16:52:03.880144 containerd[1480]: time="2025-09-12T16:52:03.880139164Z" level=warning msg="cleaning up after shim disconnected" id=381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8 namespace=k8s.io Sep 12 16:52:03.880144 containerd[1480]: time="2025-09-12T16:52:03.880147364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:03.884216 systemd-networkd[1398]: lxc_health: Link DOWN Sep 12 16:52:03.884222 systemd-networkd[1398]: lxc_health: Lost carrier Sep 12 16:52:03.896230 systemd[1]: cri-containerd-3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798.scope: Deactivated successfully. Sep 12 16:52:03.896739 systemd[1]: cri-containerd-3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798.scope: Consumed 6.120s CPU time, 126.2M memory peak, 132K read from disk, 12.9M written to disk. Sep 12 16:52:03.914222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798-rootfs.mount: Deactivated successfully. Sep 12 16:52:03.922113 containerd[1480]: time="2025-09-12T16:52:03.922053546Z" level=info msg="shim disconnected" id=3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798 namespace=k8s.io Sep 12 16:52:03.922113 containerd[1480]: time="2025-09-12T16:52:03.922106986Z" level=warning msg="cleaning up after shim disconnected" id=3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798 namespace=k8s.io Sep 12 16:52:03.922113 containerd[1480]: time="2025-09-12T16:52:03.922116066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:03.931823 containerd[1480]: time="2025-09-12T16:52:03.931765043Z" level=info msg="StopContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" returns successfully" Sep 12 16:52:03.932629 containerd[1480]: time="2025-09-12T16:52:03.932601932Z" level=info msg="StopPodSandbox for \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\"" Sep 12 16:52:03.932681 containerd[1480]: time="2025-09-12T16:52:03.932646692Z" level=info msg="Container to stop \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.934305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250-shm.mount: Deactivated successfully. 
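The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is reported once the watched CNI directory no longer holds any usable configuration, which happens here because 05-cilium.conf was just removed. A stdlib-only sketch of that emptiness check follows; the directory comes from the message, while the extension list is an assumption and this is not containerd's actual loader code.

// cnicheck.go: sketch reproducing the emptiness check behind the
// "no network config found in /etc/cni/net.d" error above. The extension
// list is an assumption, not containerd's implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders typically accept
			found = append(found, filepath.Join(dir, e.Name()))
		}
	}
	return found, nil
}

func main() {
	const dir = "/etc/cni/net.d"
	confs, err := cniConfigs(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(confs) == 0 {
		// Same condition the CRI plugin hits after 05-cilium.conf is removed.
		fmt.Printf("no network config found in %s\n", dir)
		return
	}
	for _, c := range confs {
		fmt.Println(c)
	}
}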
Sep 12 16:52:03.938319 containerd[1480]: time="2025-09-12T16:52:03.938281269Z" level=info msg="StopContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" returns successfully" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938731993Z" level=info msg="StopPodSandbox for \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\"" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938765714Z" level=info msg="Container to stop \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938779634Z" level=info msg="Container to stop \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938787874Z" level=info msg="Container to stop \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938797394Z" level=info msg="Container to stop \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.938899 containerd[1480]: time="2025-09-12T16:52:03.938812234Z" level=info msg="Container to stop \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 16:52:03.940515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180-shm.mount: Deactivated successfully. Sep 12 16:52:03.941283 systemd[1]: cri-containerd-8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250.scope: Deactivated successfully. Sep 12 16:52:03.953213 systemd[1]: cri-containerd-aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180.scope: Deactivated successfully. 
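The StopContainer calls above (timeout 30 for the operator-side container, timeout 2 for the agent) and the StopPodSandbox calls that follow are ordinary CRI requests issued by the kubelet to containerd. A minimal sketch of the same sequence sent directly to the CRI endpoint is shown below; the socket path is an assumption, and the IDs are copied from the log purely for illustration.

// stopflow.go: minimal sketch of the CRI StopContainer / StopPodSandbox
// sequence seen above. Socket path is assumed; IDs are copied from the log
// for illustration only.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint on this host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	containerID := "381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8"
	sandboxID := "8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250"

	// StopContainer delivers SIGTERM and escalates after the timeout elapses,
	// matching the "with signal terminated" / timeout lines above.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     30, // seconds
	}); err != nil {
		log.Fatalf("StopContainer: %v", err)
	}

	// With its containers stopped, the sandbox itself can be stopped, which
	// corresponds to the StopPodSandbox / TearDown messages above.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: sandboxID,
	}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}
	log.Println("container and sandbox stopped")
}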
Sep 12 16:52:03.982943 containerd[1480]: time="2025-09-12T16:52:03.982815756Z" level=info msg="shim disconnected" id=aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180 namespace=k8s.io Sep 12 16:52:03.982943 containerd[1480]: time="2025-09-12T16:52:03.982872797Z" level=warning msg="cleaning up after shim disconnected" id=aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180 namespace=k8s.io Sep 12 16:52:03.982943 containerd[1480]: time="2025-09-12T16:52:03.982881557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:03.983333 containerd[1480]: time="2025-09-12T16:52:03.983180680Z" level=info msg="shim disconnected" id=8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250 namespace=k8s.io Sep 12 16:52:03.983333 containerd[1480]: time="2025-09-12T16:52:03.983221441Z" level=warning msg="cleaning up after shim disconnected" id=8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250 namespace=k8s.io Sep 12 16:52:03.983333 containerd[1480]: time="2025-09-12T16:52:03.983229161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:04.004528 containerd[1480]: time="2025-09-12T16:52:04.004468333Z" level=info msg="TearDown network for sandbox \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\" successfully" Sep 12 16:52:04.004528 containerd[1480]: time="2025-09-12T16:52:04.004507894Z" level=info msg="StopPodSandbox for \"8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250\" returns successfully" Sep 12 16:52:04.006447 containerd[1480]: time="2025-09-12T16:52:04.006388712Z" level=info msg="TearDown network for sandbox \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" successfully" Sep 12 16:52:04.006447 containerd[1480]: time="2025-09-12T16:52:04.006417912Z" level=info msg="StopPodSandbox for \"aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180\" returns successfully" Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088000 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-etc-cni-netd\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088059 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpn9v\" (UniqueName: \"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-kube-api-access-bpn9v\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088085 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hubble-tls\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088105 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-xtables-lock\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088124 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-config-path\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088169 kubelet[2587]: I0912 16:52:04.088138 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cni-path\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088681 kubelet[2587]: I0912 16:52:04.088155 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-clustermesh-secrets\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088681 kubelet[2587]: I0912 16:52:04.088147 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.088681 kubelet[2587]: I0912 16:52:04.088174 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be50088-be0f-413c-b07c-f4e7ab1ea22e-cilium-config-path\") pod \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\" (UID: \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\") " Sep 12 16:52:04.088681 kubelet[2587]: I0912 16:52:04.088190 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-cgroup\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088681 kubelet[2587]: I0912 16:52:04.088209 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xkx6\" (UniqueName: \"kubernetes.io/projected/7be50088-be0f-413c-b07c-f4e7ab1ea22e-kube-api-access-5xkx6\") pod \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\" (UID: \"7be50088-be0f-413c-b07c-f4e7ab1ea22e\") " Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088221 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cni-path" (OuterVolumeSpecName: "cni-path") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088224 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-run\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088248 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088256 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-lib-modules\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088276 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-kernel\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.088817 kubelet[2587]: I0912 16:52:04.088292 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-bpf-maps\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088307 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-net\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088324 2587 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hostproc\") pod \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\" (UID: \"8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df\") " Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088362 2587 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088371 2587 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088383 2587 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.089034 kubelet[2587]: I0912 16:52:04.088400 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hostproc" (OuterVolumeSpecName: "hostproc") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.089172 kubelet[2587]: I0912 16:52:04.088654 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.090968 kubelet[2587]: I0912 16:52:04.090652 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be50088-be0f-413c-b07c-f4e7ab1ea22e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7be50088-be0f-413c-b07c-f4e7ab1ea22e" (UID: "7be50088-be0f-413c-b07c-f4e7ab1ea22e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 16:52:04.090968 kubelet[2587]: I0912 16:52:04.090716 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.090968 kubelet[2587]: I0912 16:52:04.090732 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.091372 kubelet[2587]: I0912 16:52:04.091340 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.091748 kubelet[2587]: I0912 16:52:04.091371 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.091748 kubelet[2587]: I0912 16:52:04.091393 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 16:52:04.091748 kubelet[2587]: I0912 16:52:04.091440 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 16:52:04.091863 kubelet[2587]: I0912 16:52:04.091706 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:52:04.092115 kubelet[2587]: I0912 16:52:04.092081 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-kube-api-access-bpn9v" (OuterVolumeSpecName: "kube-api-access-bpn9v") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "kube-api-access-bpn9v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:52:04.093374 kubelet[2587]: I0912 16:52:04.093348 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be50088-be0f-413c-b07c-f4e7ab1ea22e-kube-api-access-5xkx6" (OuterVolumeSpecName: "kube-api-access-5xkx6") pod "7be50088-be0f-413c-b07c-f4e7ab1ea22e" (UID: "7be50088-be0f-413c-b07c-f4e7ab1ea22e"). InnerVolumeSpecName "kube-api-access-5xkx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 16:52:04.093620 kubelet[2587]: I0912 16:52:04.093581 2587 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" (UID: "8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188896 2587 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188929 2587 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188939 2587 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be50088-be0f-413c-b07c-f4e7ab1ea22e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188948 2587 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188959 2587 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xkx6\" (UniqueName: \"kubernetes.io/projected/7be50088-be0f-413c-b07c-f4e7ab1ea22e-kube-api-access-5xkx6\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188967 2587 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188975 2587 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189015 kubelet[2587]: I0912 16:52:04.188984 2587 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189317 kubelet[2587]: I0912 16:52:04.188991 2587 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189317 kubelet[2587]: I0912 16:52:04.188999 2587 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189317 kubelet[2587]: I0912 16:52:04.189007 2587 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpn9v\" (UniqueName: \"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-kube-api-access-bpn9v\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189317 kubelet[2587]: I0912 16:52:04.189015 2587 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.189317 kubelet[2587]: I0912 16:52:04.189023 2587 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 16:52:04.678159 systemd[1]: Removed slice kubepods-besteffort-pod7be50088_be0f_413c_b07c_f4e7ab1ea22e.slice - libcontainer container kubepods-besteffort-pod7be50088_be0f_413c_b07c_f4e7ab1ea22e.slice. Sep 12 16:52:04.680393 systemd[1]: Removed slice kubepods-burstable-pod8c68a7ab_25ab_4933_afc0_2c2eeaa2d9df.slice - libcontainer container kubepods-burstable-pod8c68a7ab_25ab_4933_afc0_2c2eeaa2d9df.slice. Sep 12 16:52:04.680476 systemd[1]: kubepods-burstable-pod8c68a7ab_25ab_4933_afc0_2c2eeaa2d9df.slice: Consumed 6.192s CPU time, 126.5M memory peak, 148K read from disk, 12.9M written to disk. Sep 12 16:52:04.855244 kubelet[2587]: I0912 16:52:04.855023 2587 scope.go:117] "RemoveContainer" containerID="381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8" Sep 12 16:52:04.860188 containerd[1480]: time="2025-09-12T16:52:04.860148182Z" level=info msg="RemoveContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\"" Sep 12 16:52:04.860766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f320e64dcb16655c995733a089d1c79a6005a227e4737d24af682a2455a9250-rootfs.mount: Deactivated successfully. Sep 12 16:52:04.860873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab4597d391e2678b93fd8adec7a79bf3fef6ad3189d62fb2907847674b3b180-rootfs.mount: Deactivated successfully. Sep 12 16:52:04.860932 systemd[1]: var-lib-kubelet-pods-7be50088\x2dbe0f\x2d413c\x2db07c\x2df4e7ab1ea22e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xkx6.mount: Deactivated successfully. Sep 12 16:52:04.860981 systemd[1]: var-lib-kubelet-pods-8c68a7ab\x2d25ab\x2d4933\x2dafc0\x2d2c2eeaa2d9df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbpn9v.mount: Deactivated successfully. Sep 12 16:52:04.861060 systemd[1]: var-lib-kubelet-pods-8c68a7ab\x2d25ab\x2d4933\x2dafc0\x2d2c2eeaa2d9df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 16:52:04.861132 systemd[1]: var-lib-kubelet-pods-8c68a7ab\x2d25ab\x2d4933\x2dafc0\x2d2c2eeaa2d9df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 16:52:04.862878 containerd[1480]: time="2025-09-12T16:52:04.862850208Z" level=info msg="RemoveContainer for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" returns successfully" Sep 12 16:52:04.863223 kubelet[2587]: I0912 16:52:04.863161 2587 scope.go:117] "RemoveContainer" containerID="381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8" Sep 12 16:52:04.863496 containerd[1480]: time="2025-09-12T16:52:04.863360093Z" level=error msg="ContainerStatus for \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\": not found" Sep 12 16:52:04.873354 kubelet[2587]: E0912 16:52:04.873331 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\": not found" containerID="381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8" Sep 12 16:52:04.881861 kubelet[2587]: I0912 16:52:04.881729 2587 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8"} err="failed to get container status \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"381658ddab4a9518e69273ebd8efe926f72fc507a057dcdb62e4c88329bef4b8\": not found" Sep 12 16:52:04.881912 kubelet[2587]: I0912 16:52:04.881864 2587 scope.go:117] "RemoveContainer" containerID="3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798" Sep 12 16:52:04.883686 containerd[1480]: time="2025-09-12T16:52:04.883656212Z" level=info msg="RemoveContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\"" Sep 12 16:52:04.888308 containerd[1480]: time="2025-09-12T16:52:04.887490569Z" level=info msg="RemoveContainer for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" returns successfully" Sep 12 16:52:04.888466 kubelet[2587]: I0912 16:52:04.888399 2587 scope.go:117] "RemoveContainer" containerID="91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565" Sep 12 16:52:04.892217 containerd[1480]: time="2025-09-12T16:52:04.892189055Z" level=info msg="RemoveContainer for \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\"" Sep 12 16:52:04.895022 containerd[1480]: time="2025-09-12T16:52:04.894991122Z" level=info msg="RemoveContainer for \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\" returns successfully" Sep 12 16:52:04.896839 kubelet[2587]: I0912 16:52:04.896812 2587 scope.go:117] "RemoveContainer" containerID="d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67" Sep 12 16:52:04.898197 containerd[1480]: time="2025-09-12T16:52:04.898150873Z" level=info msg="RemoveContainer for \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\"" Sep 12 16:52:04.901366 containerd[1480]: time="2025-09-12T16:52:04.901337265Z" level=info msg="RemoveContainer for \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\" returns successfully" Sep 12 16:52:04.901542 kubelet[2587]: I0912 
16:52:04.901497 2587 scope.go:117] "RemoveContainer" containerID="38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476" Sep 12 16:52:04.902554 containerd[1480]: time="2025-09-12T16:52:04.902526476Z" level=info msg="RemoveContainer for \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\"" Sep 12 16:52:04.904816 containerd[1480]: time="2025-09-12T16:52:04.904783138Z" level=info msg="RemoveContainer for \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\" returns successfully" Sep 12 16:52:04.904981 kubelet[2587]: I0912 16:52:04.904931 2587 scope.go:117] "RemoveContainer" containerID="e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9" Sep 12 16:52:04.906091 containerd[1480]: time="2025-09-12T16:52:04.905856229Z" level=info msg="RemoveContainer for \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\"" Sep 12 16:52:04.908266 containerd[1480]: time="2025-09-12T16:52:04.908235452Z" level=info msg="RemoveContainer for \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\" returns successfully" Sep 12 16:52:04.908519 kubelet[2587]: I0912 16:52:04.908489 2587 scope.go:117] "RemoveContainer" containerID="3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798" Sep 12 16:52:04.908737 containerd[1480]: time="2025-09-12T16:52:04.908687376Z" level=error msg="ContainerStatus for \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\": not found" Sep 12 16:52:04.908837 kubelet[2587]: E0912 16:52:04.908818 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\": not found" containerID="3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798" Sep 12 16:52:04.908885 kubelet[2587]: I0912 16:52:04.908846 2587 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798"} err="failed to get container status \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798\": not found" Sep 12 16:52:04.908885 kubelet[2587]: I0912 16:52:04.908866 2587 scope.go:117] "RemoveContainer" containerID="91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565" Sep 12 16:52:04.909207 kubelet[2587]: E0912 16:52:04.909190 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\": not found" containerID="91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565" Sep 12 16:52:04.909239 containerd[1480]: time="2025-09-12T16:52:04.909064900Z" level=error msg="ContainerStatus for \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\": not found" Sep 12 16:52:04.909270 kubelet[2587]: I0912 16:52:04.909214 2587 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565"} err="failed to get container status \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\": rpc error: code = NotFound desc = an error occurred when try to find container \"91aa956df6385bff24de3898a88322aaab364b07c77f5fe57b30c27b88cd6565\": not found" Sep 12 16:52:04.909270 kubelet[2587]: I0912 16:52:04.909233 2587 scope.go:117] "RemoveContainer" containerID="d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67" Sep 12 16:52:04.909394 containerd[1480]: time="2025-09-12T16:52:04.909363983Z" level=error msg="ContainerStatus for \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\": not found" Sep 12 16:52:04.909500 kubelet[2587]: E0912 16:52:04.909473 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\": not found" containerID="d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67" Sep 12 16:52:04.909500 kubelet[2587]: I0912 16:52:04.909491 2587 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67"} err="failed to get container status \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0dfc48a04bafb633accf99c0bde759fc5f93868d8527e84cf21fda74a05fc67\": not found" Sep 12 16:52:04.909589 kubelet[2587]: I0912 16:52:04.909503 2587 scope.go:117] "RemoveContainer" containerID="38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476" Sep 12 16:52:04.909687 containerd[1480]: time="2025-09-12T16:52:04.909619146Z" level=error msg="ContainerStatus for \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\": not found" Sep 12 16:52:04.909750 kubelet[2587]: E0912 16:52:04.909722 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\": not found" containerID="38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476" Sep 12 16:52:04.909774 kubelet[2587]: I0912 16:52:04.909743 2587 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476"} err="failed to get container status \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\": rpc error: code = NotFound desc = an error occurred when try to find container \"38f838306886c1fb84239a00b92548d58036387f8f338f175fd7716c576ce476\": not found" Sep 12 16:52:04.909774 kubelet[2587]: I0912 16:52:04.909759 2587 scope.go:117] "RemoveContainer" containerID="e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9" Sep 12 16:52:04.909924 containerd[1480]: time="2025-09-12T16:52:04.909880668Z" level=error msg="ContainerStatus for \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\": not found" Sep 12 16:52:04.909986 kubelet[2587]: E0912 16:52:04.909975 2587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\": not found" containerID="e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9" Sep 12 16:52:04.910015 kubelet[2587]: I0912 16:52:04.909990 2587 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9"} err="failed to get container status \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e31584fe04b16ad0d51db5e23ef05346ae49d152c4a1b20df34f0aea45e750f9\": not found" Sep 12 16:52:05.800732 sshd[4249]: Connection closed by 10.0.0.1 port 46378 Sep 12 16:52:05.801273 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:05.810944 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:46378.service: Deactivated successfully. Sep 12 16:52:05.812459 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 16:52:05.812645 systemd[1]: session-23.scope: Consumed 1.092s CPU time, 25.7M memory peak. Sep 12 16:52:05.813240 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit. Sep 12 16:52:05.824957 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:46386.service - OpenSSH per-connection server daemon (10.0.0.1:46386). Sep 12 16:52:05.825822 systemd-logind[1465]: Removed session 23. Sep 12 16:52:05.864394 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 46386 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:52:05.865483 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:05.869621 systemd-logind[1465]: New session 24 of user core. Sep 12 16:52:05.874837 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 16:52:06.673317 kubelet[2587]: I0912 16:52:06.673275 2587 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be50088-be0f-413c-b07c-f4e7ab1ea22e" path="/var/lib/kubelet/pods/7be50088-be0f-413c-b07c-f4e7ab1ea22e/volumes" Sep 12 16:52:06.673669 kubelet[2587]: I0912 16:52:06.673647 2587 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" path="/var/lib/kubelet/pods/8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df/volumes" Sep 12 16:52:07.006024 sshd[4414]: Connection closed by 10.0.0.1 port 46386 Sep 12 16:52:07.006649 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:07.018779 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:46386.service: Deactivated successfully. Sep 12 16:52:07.020935 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 16:52:07.022744 systemd[1]: session-24.scope: Consumed 1.056s CPU time, 24.3M memory peak. Sep 12 16:52:07.023449 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit. Sep 12 16:52:07.031116 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:46400.service - OpenSSH per-connection server daemon (10.0.0.1:46400). Sep 12 16:52:07.032219 systemd-logind[1465]: Removed session 24. 
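After the containers are removed, the follow-up ContainerStatus calls logged above come back with gRPC NotFound, which the kubelet records and then treats as "already gone" rather than as a failure. A small sketch of that tolerance, using the CRI client directly (socket path assumed, ID copied from the log), could look like this:

// removecheck.go: sketch of treating a NotFound ContainerStatus as
// "already removed", mirroring the behaviour logged above. The CRI socket
// path is an assumption.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// ID copied from the log; by this point it has already been removed.
	id := "3dde4212269f5501e95d8861d2547a9a03ab4e911af582cf79b54aa80a23e798"

	_, err = rt.ContainerStatus(context.Background(),
		&runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch status.Code(err) {
	case codes.OK:
		log.Printf("container %s still present", id)
	case codes.NotFound:
		// Same condition as the "rpc error: code = NotFound" lines above:
		// the container is gone, so deletion is considered complete.
		log.Printf("container %s not found, nothing to do", id)
	default:
		log.Fatalf("ContainerStatus: %v", err)
	}
}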
Sep 12 16:52:07.037860 kubelet[2587]: I0912 16:52:07.037816 2587 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c68a7ab-25ab-4933-afc0-2c2eeaa2d9df" containerName="cilium-agent" Sep 12 16:52:07.037860 kubelet[2587]: I0912 16:52:07.037848 2587 memory_manager.go:355] "RemoveStaleState removing state" podUID="7be50088-be0f-413c-b07c-f4e7ab1ea22e" containerName="cilium-operator" Sep 12 16:52:07.049813 systemd[1]: Created slice kubepods-burstable-poda69b77c8_c376_479d_82c6_b63f39276f22.slice - libcontainer container kubepods-burstable-poda69b77c8_c376_479d_82c6_b63f39276f22.slice. Sep 12 16:52:07.087784 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 46400 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:52:07.091124 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:07.099455 systemd-logind[1465]: New session 25 of user core. Sep 12 16:52:07.104202 kubelet[2587]: I0912 16:52:07.104158 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-bpf-maps\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104202 kubelet[2587]: I0912 16:52:07.104200 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a69b77c8-c376-479d-82c6-b63f39276f22-cilium-config-path\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104314 kubelet[2587]: I0912 16:52:07.104222 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a69b77c8-c376-479d-82c6-b63f39276f22-cilium-ipsec-secrets\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104314 kubelet[2587]: I0912 16:52:07.104239 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-host-proc-sys-kernel\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104314 kubelet[2587]: I0912 16:52:07.104258 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r24p\" (UniqueName: \"kubernetes.io/projected/a69b77c8-c376-479d-82c6-b63f39276f22-kube-api-access-7r24p\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104314 kubelet[2587]: I0912 16:52:07.104278 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-cilium-run\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104314 kubelet[2587]: I0912 16:52:07.104292 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-hostproc\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " 
pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104308 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a69b77c8-c376-479d-82c6-b63f39276f22-clustermesh-secrets\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104322 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-cilium-cgroup\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104337 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-etc-cni-netd\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104352 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-xtables-lock\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104366 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a69b77c8-c376-479d-82c6-b63f39276f22-hubble-tls\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104421 kubelet[2587]: I0912 16:52:07.104381 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-cni-path\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104539 kubelet[2587]: I0912 16:52:07.104397 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-lib-modules\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.104539 kubelet[2587]: I0912 16:52:07.104413 2587 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a69b77c8-c376-479d-82c6-b63f39276f22-host-proc-sys-net\") pod \"cilium-wqmt9\" (UID: \"a69b77c8-c376-479d-82c6-b63f39276f22\") " pod="kube-system/cilium-wqmt9" Sep 12 16:52:07.108868 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 16:52:07.158163 sshd[4429]: Connection closed by 10.0.0.1 port 46400 Sep 12 16:52:07.158762 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Sep 12 16:52:07.172819 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:46400.service: Deactivated successfully. Sep 12 16:52:07.174359 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 16:52:07.175016 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit. 
Sep 12 16:52:07.185956 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:46402.service - OpenSSH per-connection server daemon (10.0.0.1:46402). Sep 12 16:52:07.187132 systemd-logind[1465]: Removed session 25. Sep 12 16:52:07.232311 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 46402 ssh2: RSA SHA256:hBa6LAhizNVTLUsQMkAlM2iOfW9N2Aj4dQy/X5pbOfM Sep 12 16:52:07.233450 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 16:52:07.237547 systemd-logind[1465]: New session 26 of user core. Sep 12 16:52:07.246850 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 16:52:07.355918 kubelet[2587]: E0912 16:52:07.355871 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:52:07.356740 containerd[1480]: time="2025-09-12T16:52:07.356377737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqmt9,Uid:a69b77c8-c376-479d-82c6-b63f39276f22,Namespace:kube-system,Attempt:0,}" Sep 12 16:52:07.375262 containerd[1480]: time="2025-09-12T16:52:07.375130466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 16:52:07.375262 containerd[1480]: time="2025-09-12T16:52:07.375223147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 16:52:07.375262 containerd[1480]: time="2025-09-12T16:52:07.375235587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:07.375758 containerd[1480]: time="2025-09-12T16:52:07.375308988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 16:52:07.397923 systemd[1]: Started cri-containerd-c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba.scope - libcontainer container c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba. 
Sep 12 16:52:07.424195 containerd[1480]: time="2025-09-12T16:52:07.424156068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqmt9,Uid:a69b77c8-c376-479d-82c6-b63f39276f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\"" Sep 12 16:52:07.425546 kubelet[2587]: E0912 16:52:07.425069 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:52:07.427363 containerd[1480]: time="2025-09-12T16:52:07.427241736Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 16:52:07.439038 containerd[1480]: time="2025-09-12T16:52:07.438997841Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d\"" Sep 12 16:52:07.439729 containerd[1480]: time="2025-09-12T16:52:07.439686088Z" level=info msg="StartContainer for \"794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d\"" Sep 12 16:52:07.466858 systemd[1]: Started cri-containerd-794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d.scope - libcontainer container 794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d. Sep 12 16:52:07.488347 containerd[1480]: time="2025-09-12T16:52:07.488293566Z" level=info msg="StartContainer for \"794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d\" returns successfully" Sep 12 16:52:07.496130 systemd[1]: cri-containerd-794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d.scope: Deactivated successfully. 
Sep 12 16:52:07.523497 containerd[1480]: time="2025-09-12T16:52:07.523427042Z" level=info msg="shim disconnected" id=794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d namespace=k8s.io Sep 12 16:52:07.523497 containerd[1480]: time="2025-09-12T16:52:07.523481563Z" level=warning msg="cleaning up after shim disconnected" id=794a72427f76a38bd329c83064d973e689f38b9b517cdd96275ad1cd10a5099d namespace=k8s.io Sep 12 16:52:07.523497 containerd[1480]: time="2025-09-12T16:52:07.523493083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:07.721729 kubelet[2587]: E0912 16:52:07.721609 2587 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 16:52:07.863523 kubelet[2587]: E0912 16:52:07.863009 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:52:07.868587 containerd[1480]: time="2025-09-12T16:52:07.866042130Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 16:52:07.875804 containerd[1480]: time="2025-09-12T16:52:07.875751857Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504\"" Sep 12 16:52:07.876339 containerd[1480]: time="2025-09-12T16:52:07.876297182Z" level=info msg="StartContainer for \"828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504\"" Sep 12 16:52:07.908835 systemd[1]: Started cri-containerd-828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504.scope - libcontainer container 828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504. Sep 12 16:52:07.933962 containerd[1480]: time="2025-09-12T16:52:07.933919261Z" level=info msg="StartContainer for \"828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504\" returns successfully" Sep 12 16:52:07.937370 systemd[1]: cri-containerd-828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504.scope: Deactivated successfully. 
Sep 12 16:52:07.955313 containerd[1480]: time="2025-09-12T16:52:07.955135213Z" level=info msg="shim disconnected" id=828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504 namespace=k8s.io Sep 12 16:52:07.955313 containerd[1480]: time="2025-09-12T16:52:07.955182693Z" level=warning msg="cleaning up after shim disconnected" id=828cad48f942c4f8b45cbd018ca3a63012da9303b977140a17ada30c4a760504 namespace=k8s.io Sep 12 16:52:07.955313 containerd[1480]: time="2025-09-12T16:52:07.955191173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:08.866378 kubelet[2587]: E0912 16:52:08.866337 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 16:52:08.869293 containerd[1480]: time="2025-09-12T16:52:08.868392954Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 16:52:08.885766 containerd[1480]: time="2025-09-12T16:52:08.885732386Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc\"" Sep 12 16:52:08.886805 containerd[1480]: time="2025-09-12T16:52:08.886270031Z" level=info msg="StartContainer for \"24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc\"" Sep 12 16:52:08.907828 systemd[1]: Started cri-containerd-24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc.scope - libcontainer container 24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc. Sep 12 16:52:08.933013 systemd[1]: cri-containerd-24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc.scope: Deactivated successfully. Sep 12 16:52:08.934101 containerd[1480]: time="2025-09-12T16:52:08.934048170Z" level=info msg="StartContainer for \"24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc\" returns successfully" Sep 12 16:52:08.955836 containerd[1480]: time="2025-09-12T16:52:08.955768400Z" level=info msg="shim disconnected" id=24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc namespace=k8s.io Sep 12 16:52:08.955836 containerd[1480]: time="2025-09-12T16:52:08.955831641Z" level=warning msg="cleaning up after shim disconnected" id=24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc namespace=k8s.io Sep 12 16:52:08.956008 containerd[1480]: time="2025-09-12T16:52:08.955849161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 16:52:09.210091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e931079eb15c37c8d948999c7e8f54512748cce44cbe445ff64cea656e9cbc-rootfs.mount: Deactivated successfully. 
Sep 12 16:52:09.869807 kubelet[2587]: E0912 16:52:09.869769 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:09.874672 containerd[1480]: time="2025-09-12T16:52:09.874639737Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 16:52:09.887197 containerd[1480]: time="2025-09-12T16:52:09.887096483Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506\""
Sep 12 16:52:09.888991 containerd[1480]: time="2025-09-12T16:52:09.888949339Z" level=info msg="StartContainer for \"36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506\""
Sep 12 16:52:09.915827 systemd[1]: Started cri-containerd-36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506.scope - libcontainer container 36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506.
Sep 12 16:52:09.933149 systemd[1]: cri-containerd-36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506.scope: Deactivated successfully.
Sep 12 16:52:09.935055 containerd[1480]: time="2025-09-12T16:52:09.934951172Z" level=info msg="StartContainer for \"36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506\" returns successfully"
Sep 12 16:52:09.951900 containerd[1480]: time="2025-09-12T16:52:09.951850516Z" level=info msg="shim disconnected" id=36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506 namespace=k8s.io
Sep 12 16:52:09.952208 containerd[1480]: time="2025-09-12T16:52:09.952062598Z" level=warning msg="cleaning up after shim disconnected" id=36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506 namespace=k8s.io
Sep 12 16:52:09.952208 containerd[1480]: time="2025-09-12T16:52:09.952087758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 16:52:10.209748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36497cefdb72ccaa06f8b02f9ab1e387bb598ea2a13979775874004910dd1506-rootfs.mount: Deactivated successfully.
Sep 12 16:52:10.874611 kubelet[2587]: E0912 16:52:10.874582 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:10.877037 containerd[1480]: time="2025-09-12T16:52:10.876999178Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 16:52:10.891988 containerd[1480]: time="2025-09-12T16:52:10.891888902Z" level=info msg="CreateContainer within sandbox \"c98cf9d47affc7f92c180bf0288e5a285cf46ab432d9d696284399ee24b26dba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7225691e7afc9a8096eb67723267aa13fea996ef2054727e686c4d9f4d33eccd\""
Sep 12 16:52:10.892724 containerd[1480]: time="2025-09-12T16:52:10.892395786Z" level=info msg="StartContainer for \"7225691e7afc9a8096eb67723267aa13fea996ef2054727e686c4d9f4d33eccd\""
Sep 12 16:52:10.918854 systemd[1]: Started cri-containerd-7225691e7afc9a8096eb67723267aa13fea996ef2054727e686c4d9f4d33eccd.scope - libcontainer container 7225691e7afc9a8096eb67723267aa13fea996ef2054727e686c4d9f4d33eccd.
Sep 12 16:52:10.947096 containerd[1480]: time="2025-09-12T16:52:10.947038200Z" level=info msg="StartContainer for \"7225691e7afc9a8096eb67723267aa13fea996ef2054727e686c4d9f4d33eccd\" returns successfully"
Sep 12 16:52:11.197795 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 16:52:11.881038 kubelet[2587]: E0912 16:52:11.879736 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:11.899878 kubelet[2587]: I0912 16:52:11.899824 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqmt9" podStartSLOduration=4.899808484 podStartE2EDuration="4.899808484s" podCreationTimestamp="2025-09-12 16:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 16:52:11.899631722 +0000 UTC m=+79.314827800" watchObservedRunningTime="2025-09-12 16:52:11.899808484 +0000 UTC m=+79.315004562"
Sep 12 16:52:13.356962 kubelet[2587]: E0912 16:52:13.356863 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:14.064251 systemd-networkd[1398]: lxc_health: Link UP
Sep 12 16:52:14.073983 systemd-networkd[1398]: lxc_health: Gained carrier
Sep 12 16:52:15.357681 kubelet[2587]: E0912 16:52:15.357586 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:15.885934 kubelet[2587]: E0912 16:52:15.885904 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:16.124856 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Sep 12 16:52:16.887717 kubelet[2587]: E0912 16:52:16.887598 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:17.670975 kubelet[2587]: E0912 16:52:17.670234 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:18.670973 kubelet[2587]: E0912 16:52:18.670934 2587 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 16:52:19.926123 sshd[4442]: Connection closed by 10.0.0.1 port 46402
Sep 12 16:52:19.926598 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
Sep 12 16:52:19.929899 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:46402.service: Deactivated successfully.
Sep 12 16:52:19.931689 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 16:52:19.934349 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit.
Sep 12 16:52:19.935385 systemd-logind[1465]: Removed session 26.