Jul 14 21:52:19.901972 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 14 21:52:19.902020 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025 Jul 14 21:52:19.902031 kernel: KASLR enabled Jul 14 21:52:19.902037 kernel: efi: EFI v2.7 by EDK II Jul 14 21:52:19.902043 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 14 21:52:19.902049 kernel: random: crng init done Jul 14 21:52:19.902057 kernel: ACPI: Early table checksum verification disabled Jul 14 21:52:19.902063 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 14 21:52:19.902069 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 14 21:52:19.902077 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902084 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902090 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902096 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902103 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902111 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902119 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902126 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902133 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 21:52:19.902139 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 14 21:52:19.902146 kernel: NUMA: Failed to 
initialise from firmware Jul 14 21:52:19.902152 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:52:19.902159 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jul 14 21:52:19.902166 kernel: Zone ranges: Jul 14 21:52:19.902172 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:52:19.902179 kernel: DMA32 empty Jul 14 21:52:19.902186 kernel: Normal empty Jul 14 21:52:19.902193 kernel: Movable zone start for each node Jul 14 21:52:19.902199 kernel: Early memory node ranges Jul 14 21:52:19.902206 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 14 21:52:19.902212 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 14 21:52:19.902219 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 14 21:52:19.902225 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 14 21:52:19.902232 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 14 21:52:19.902238 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 14 21:52:19.902245 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 14 21:52:19.902256 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 21:52:19.902264 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 14 21:52:19.902273 kernel: psci: probing for conduit method from ACPI. Jul 14 21:52:19.902279 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 14 21:52:19.902286 kernel: psci: Using standard PSCI v0.2 function IDs Jul 14 21:52:19.902296 kernel: psci: Trusted OS migration not required Jul 14 21:52:19.902313 kernel: psci: SMC Calling Convention v1.1 Jul 14 21:52:19.902321 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 14 21:52:19.902335 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 14 21:52:19.902342 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 14 21:52:19.902349 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 14 21:52:19.902356 kernel: Detected PIPT I-cache on CPU0 Jul 14 21:52:19.902363 kernel: CPU features: detected: GIC system register CPU interface Jul 14 21:52:19.902371 kernel: CPU features: detected: Hardware dirty bit management Jul 14 21:52:19.902378 kernel: CPU features: detected: Spectre-v4 Jul 14 21:52:19.902385 kernel: CPU features: detected: Spectre-BHB Jul 14 21:52:19.902398 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 14 21:52:19.902431 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 14 21:52:19.902440 kernel: CPU features: detected: ARM erratum 1418040 Jul 14 21:52:19.902447 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 14 21:52:19.902454 kernel: alternatives: applying boot alternatives Jul 14 21:52:19.902462 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b Jul 14 21:52:19.902470 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 14 21:52:19.902477 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 21:52:19.902484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 21:52:19.902491 kernel: Fallback order for Node 0: 0 Jul 14 21:52:19.902498 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 14 21:52:19.902505 kernel: Policy zone: DMA Jul 14 21:52:19.902512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 21:52:19.902520 kernel: software IO TLB: area num 4. Jul 14 21:52:19.902527 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 14 21:52:19.902535 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) Jul 14 21:52:19.902542 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 21:52:19.902549 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 21:52:19.902557 kernel: rcu: RCU event tracing is enabled. Jul 14 21:52:19.902564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 21:52:19.902572 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 21:52:19.902579 kernel: Tracing variant of Tasks RCU enabled. Jul 14 21:52:19.902586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 14 21:52:19.902593 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 21:52:19.902600 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 14 21:52:19.902609 kernel: GICv3: 256 SPIs implemented Jul 14 21:52:19.902616 kernel: GICv3: 0 Extended SPIs implemented Jul 14 21:52:19.902623 kernel: Root IRQ handler: gic_handle_irq Jul 14 21:52:19.902630 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 14 21:52:19.902637 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 14 21:52:19.902644 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 14 21:52:19.902651 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 14 21:52:19.902681 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 14 21:52:19.902700 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 14 21:52:19.902708 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 14 21:52:19.902715 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 14 21:52:19.902723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:52:19.902731 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 14 21:52:19.902738 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 14 21:52:19.902745 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 14 21:52:19.902752 kernel: arm-pv: using stolen time PV Jul 14 21:52:19.902760 kernel: Console: colour dummy device 80x25 Jul 14 21:52:19.902767 kernel: ACPI: Core revision 20230628 Jul 14 21:52:19.902775 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Jul 14 21:52:19.902782 kernel: pid_max: default: 32768 minimum: 301 Jul 14 21:52:19.902790 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 14 21:52:19.902799 kernel: landlock: Up and running. Jul 14 21:52:19.902806 kernel: SELinux: Initializing. Jul 14 21:52:19.902813 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 21:52:19.902821 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 21:52:19.902828 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 21:52:19.902836 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 21:52:19.902843 kernel: rcu: Hierarchical SRCU implementation. Jul 14 21:52:19.902850 kernel: rcu: Max phase no-delay instances is 400. Jul 14 21:52:19.902858 kernel: Platform MSI: ITS@0x8080000 domain created Jul 14 21:52:19.902866 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 14 21:52:19.902874 kernel: Remapping and enabling EFI services. Jul 14 21:52:19.902881 kernel: smp: Bringing up secondary CPUs ... 
Jul 14 21:52:19.902889 kernel: Detected PIPT I-cache on CPU1 Jul 14 21:52:19.902896 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 14 21:52:19.902904 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 14 21:52:19.902950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:52:19.902958 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 14 21:52:19.902965 kernel: Detected PIPT I-cache on CPU2 Jul 14 21:52:19.902973 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 14 21:52:19.902983 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 14 21:52:19.902991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:52:19.903004 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 14 21:52:19.903013 kernel: Detected PIPT I-cache on CPU3 Jul 14 21:52:19.903021 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 14 21:52:19.903029 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 14 21:52:19.903037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 21:52:19.903044 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 14 21:52:19.903052 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 21:52:19.903061 kernel: SMP: Total of 4 processors activated. 
Jul 14 21:52:19.903069 kernel: CPU features: detected: 32-bit EL0 Support Jul 14 21:52:19.903077 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 14 21:52:19.903085 kernel: CPU features: detected: Common not Private translations Jul 14 21:52:19.903092 kernel: CPU features: detected: CRC32 instructions Jul 14 21:52:19.903100 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 14 21:52:19.903108 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 14 21:52:19.903116 kernel: CPU features: detected: LSE atomic instructions Jul 14 21:52:19.903125 kernel: CPU features: detected: Privileged Access Never Jul 14 21:52:19.903133 kernel: CPU features: detected: RAS Extension Support Jul 14 21:52:19.903141 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 14 21:52:19.903149 kernel: CPU: All CPU(s) started at EL1 Jul 14 21:52:19.903157 kernel: alternatives: applying system-wide alternatives Jul 14 21:52:19.903165 kernel: devtmpfs: initialized Jul 14 21:52:19.903173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 21:52:19.903180 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 21:52:19.903188 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 21:52:19.903197 kernel: SMBIOS 3.0.0 present. 
Jul 14 21:52:19.903205 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 14 21:52:19.903213 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 21:52:19.903221 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 14 21:52:19.903228 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 14 21:52:19.903236 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 14 21:52:19.903244 kernel: audit: initializing netlink subsys (disabled) Jul 14 21:52:19.903258 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1 Jul 14 21:52:19.903266 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 21:52:19.903275 kernel: cpuidle: using governor menu Jul 14 21:52:19.903283 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 14 21:52:19.903291 kernel: ASID allocator initialised with 32768 entries Jul 14 21:52:19.903299 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 21:52:19.903316 kernel: Serial: AMBA PL011 UART driver Jul 14 21:52:19.903324 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 14 21:52:19.903332 kernel: Modules: 0 pages in range for non-PLT usage Jul 14 21:52:19.903339 kernel: Modules: 509008 pages in range for PLT usage Jul 14 21:52:19.903347 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 21:52:19.903357 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 21:52:19.903364 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 14 21:52:19.903372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 14 21:52:19.903380 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 21:52:19.903388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 21:52:19.903395 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Jul 14 21:52:19.903403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 14 21:52:19.903411 kernel: ACPI: Added _OSI(Module Device) Jul 14 21:52:19.903419 kernel: ACPI: Added _OSI(Processor Device) Jul 14 21:52:19.903429 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 21:52:19.903443 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 21:52:19.903450 kernel: ACPI: Interpreter enabled Jul 14 21:52:19.903459 kernel: ACPI: Using GIC for interrupt routing Jul 14 21:52:19.903474 kernel: ACPI: MCFG table detected, 1 entries Jul 14 21:52:19.903481 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 14 21:52:19.903495 kernel: printk: console [ttyAMA0] enabled Jul 14 21:52:19.903503 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 21:52:19.903664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 21:52:19.903744 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 14 21:52:19.903817 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 14 21:52:19.903918 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 14 21:52:19.903982 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 14 21:52:19.903992 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 14 21:52:19.904000 kernel: PCI host bridge to bus 0000:00 Jul 14 21:52:19.904072 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 14 21:52:19.904135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 14 21:52:19.904195 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 14 21:52:19.904262 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 21:52:19.904359 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 
0x060000 Jul 14 21:52:19.904453 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 21:52:19.904522 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 14 21:52:19.904592 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 14 21:52:19.904659 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 21:52:19.904725 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 21:52:19.904791 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 14 21:52:19.904858 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 14 21:52:19.904920 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 14 21:52:19.904979 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 14 21:52:19.905040 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 14 21:52:19.905050 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 14 21:52:19.905058 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 14 21:52:19.905066 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 14 21:52:19.905074 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 14 21:52:19.905081 kernel: iommu: Default domain type: Translated Jul 14 21:52:19.905089 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 14 21:52:19.905097 kernel: efivars: Registered efivars operations Jul 14 21:52:19.905105 kernel: vgaarb: loaded Jul 14 21:52:19.905114 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 14 21:52:19.905122 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 21:52:19.905130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 21:52:19.905137 kernel: pnp: PnP ACPI init Jul 14 21:52:19.905213 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 14 21:52:19.905224 kernel: pnp: PnP ACPI: found 1 devices Jul 14 
21:52:19.905232 kernel: NET: Registered PF_INET protocol family Jul 14 21:52:19.905239 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 21:52:19.905249 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 21:52:19.905265 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 21:52:19.905273 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 21:52:19.905281 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 14 21:52:19.905289 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 21:52:19.905296 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 21:52:19.905321 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 21:52:19.905330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 21:52:19.905338 kernel: PCI: CLS 0 bytes, default 64 Jul 14 21:52:19.905348 kernel: kvm [1]: HYP mode not available Jul 14 21:52:19.905356 kernel: Initialise system trusted keyrings Jul 14 21:52:19.905363 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 21:52:19.905371 kernel: Key type asymmetric registered Jul 14 21:52:19.905378 kernel: Asymmetric key parser 'x509' registered Jul 14 21:52:19.905386 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 14 21:52:19.905394 kernel: io scheduler mq-deadline registered Jul 14 21:52:19.905401 kernel: io scheduler kyber registered Jul 14 21:52:19.905409 kernel: io scheduler bfq registered Jul 14 21:52:19.905419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 14 21:52:19.905426 kernel: ACPI: button: Power Button [PWRB] Jul 14 21:52:19.905434 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 14 21:52:19.905516 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 14 
21:52:19.905527 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 21:52:19.905535 kernel: thunder_xcv, ver 1.0 Jul 14 21:52:19.905543 kernel: thunder_bgx, ver 1.0 Jul 14 21:52:19.905551 kernel: nicpf, ver 1.0 Jul 14 21:52:19.905559 kernel: nicvf, ver 1.0 Jul 14 21:52:19.905639 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 14 21:52:19.905703 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:52:19 UTC (1752529939) Jul 14 21:52:19.905714 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 14 21:52:19.905722 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 14 21:52:19.905730 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 14 21:52:19.905738 kernel: watchdog: Hard watchdog permanently disabled Jul 14 21:52:19.905745 kernel: NET: Registered PF_INET6 protocol family Jul 14 21:52:19.905753 kernel: Segment Routing with IPv6 Jul 14 21:52:19.905763 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 21:52:19.905771 kernel: NET: Registered PF_PACKET protocol family Jul 14 21:52:19.905778 kernel: Key type dns_resolver registered Jul 14 21:52:19.905786 kernel: registered taskstats version 1 Jul 14 21:52:19.905794 kernel: Loading compiled-in X.509 certificates Jul 14 21:52:19.905802 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301' Jul 14 21:52:19.905810 kernel: Key type .fscrypt registered Jul 14 21:52:19.905817 kernel: Key type fscrypt-provisioning registered Jul 14 21:52:19.905825 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 21:52:19.905834 kernel: ima: Allocated hash algorithm: sha1 Jul 14 21:52:19.905842 kernel: ima: No architecture policies found Jul 14 21:52:19.905849 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 14 21:52:19.905857 kernel: clk: Disabling unused clocks Jul 14 21:52:19.905865 kernel: Freeing unused kernel memory: 39424K Jul 14 21:52:19.905872 kernel: Run /init as init process Jul 14 21:52:19.905880 kernel: with arguments: Jul 14 21:52:19.905888 kernel: /init Jul 14 21:52:19.905895 kernel: with environment: Jul 14 21:52:19.905904 kernel: HOME=/ Jul 14 21:52:19.905912 kernel: TERM=linux Jul 14 21:52:19.905919 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 21:52:19.905929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 21:52:19.905939 systemd[1]: Detected virtualization kvm. Jul 14 21:52:19.905947 systemd[1]: Detected architecture arm64. Jul 14 21:52:19.905955 systemd[1]: Running in initrd. Jul 14 21:52:19.905963 systemd[1]: No hostname configured, using default hostname. Jul 14 21:52:19.905972 systemd[1]: Hostname set to . Jul 14 21:52:19.905981 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:52:19.905989 systemd[1]: Queued start job for default target initrd.target. Jul 14 21:52:19.905998 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:52:19.906006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:52:19.906015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 14 21:52:19.906023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:52:19.906034 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 21:52:19.906042 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 21:52:19.906052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 21:52:19.906060 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 21:52:19.906069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:52:19.906077 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:52:19.906086 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:52:19.906096 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:52:19.906104 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:52:19.906113 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:52:19.906121 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 21:52:19.906129 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 21:52:19.906138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 21:52:19.906146 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 21:52:19.906154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:52:19.906162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:52:19.906172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:52:19.906181 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 14 21:52:19.906189 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 21:52:19.906197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:52:19.906205 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 21:52:19.906214 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 21:52:19.906222 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:52:19.906230 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:52:19.906240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 21:52:19.906249 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 21:52:19.906266 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:52:19.906274 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 21:52:19.906283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 21:52:19.906294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:52:19.906415 systemd-journald[237]: Collecting audit messages is disabled. Jul 14 21:52:19.906437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 21:52:19.906446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 21:52:19.906458 systemd-journald[237]: Journal started Jul 14 21:52:19.906476 systemd-journald[237]: Runtime Journal (/run/log/journal/3e45320bb9bd472ba3d5b0cb2c51409b) is 5.9M, max 47.3M, 41.4M free. Jul 14 21:52:19.897765 systemd-modules-load[239]: Inserted module 'overlay' Jul 14 21:52:19.909567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 21:52:19.911344 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 14 21:52:19.911375 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 21:52:19.913654 systemd-modules-load[239]: Inserted module 'br_netfilter' Jul 14 21:52:19.916040 kernel: Bridge firewalling registered Jul 14 21:52:19.916102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 21:52:19.919856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:52:19.924031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 21:52:19.925689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:52:19.934203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:52:19.936696 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:52:19.939789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:52:19.955557 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 21:52:19.958050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 21:52:19.966070 dracut-cmdline[275]: dracut-dracut-053 Jul 14 21:52:19.968578 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b Jul 14 21:52:19.989101 systemd-resolved[277]: Positive Trust Anchors: Jul 14 21:52:19.989122 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:52:19.989154 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 21:52:19.993884 systemd-resolved[277]: Defaulting to hostname 'linux'. Jul 14 21:52:19.995090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 21:52:19.998695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:52:20.040336 kernel: SCSI subsystem initialized Jul 14 21:52:20.044322 kernel: Loading iSCSI transport class v2.0-870. Jul 14 21:52:20.052354 kernel: iscsi: registered transport (tcp) Jul 14 21:52:20.064573 kernel: iscsi: registered transport (qla4xxx) Jul 14 21:52:20.064598 kernel: QLogic iSCSI HBA Driver Jul 14 21:52:20.108491 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 21:52:20.121461 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 21:52:20.148088 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 14 21:52:20.149819 kernel: device-mapper: uevent: version 1.0.3 Jul 14 21:52:20.149856 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 21:52:20.196331 kernel: raid6: neonx8 gen() 15780 MB/s Jul 14 21:52:20.213318 kernel: raid6: neonx4 gen() 15662 MB/s Jul 14 21:52:20.230318 kernel: raid6: neonx2 gen() 13259 MB/s Jul 14 21:52:20.247321 kernel: raid6: neonx1 gen() 10486 MB/s Jul 14 21:52:20.264319 kernel: raid6: int64x8 gen() 6966 MB/s Jul 14 21:52:20.281319 kernel: raid6: int64x4 gen() 7362 MB/s Jul 14 21:52:20.298320 kernel: raid6: int64x2 gen() 6137 MB/s Jul 14 21:52:20.315395 kernel: raid6: int64x1 gen() 5050 MB/s Jul 14 21:52:20.315424 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Jul 14 21:52:20.333372 kernel: raid6: .... xor() 11948 MB/s, rmw enabled Jul 14 21:52:20.333388 kernel: raid6: using neon recovery algorithm Jul 14 21:52:20.338785 kernel: xor: measuring software checksum speed Jul 14 21:52:20.338803 kernel: 8regs : 19168 MB/sec Jul 14 21:52:20.339446 kernel: 32regs : 19636 MB/sec Jul 14 21:52:20.340660 kernel: arm64_neon : 26945 MB/sec Jul 14 21:52:20.340687 kernel: xor: using function: arm64_neon (26945 MB/sec) Jul 14 21:52:20.390349 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 21:52:20.400217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:52:20.408495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:52:20.420302 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jul 14 21:52:20.423474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:52:20.426868 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 21:52:20.441147 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jul 14 21:52:20.467496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 14 21:52:20.483490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:52:20.525388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:52:20.535422 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:52:20.546268 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:52:20.547931 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:52:20.549937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:52:20.552446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:52:20.560443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:52:20.568607 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:52:20.575319 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:52:20.576372 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:52:20.581869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:52:20.581906 kernel: GPT:9289727 != 19775487
Jul 14 21:52:20.581917 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:52:20.581928 kernel: GPT:9289727 != 19775487
Jul 14 21:52:20.581938 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:52:20.581953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:20.581725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:52:20.581842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:20.585507 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:52:20.586811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:52:20.586950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:20.589202 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:20.601572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:20.606178 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (508)
Jul 14 21:52:20.606203 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (506)
Jul 14 21:52:20.616691 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:52:20.617994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:20.623420 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:52:20.629836 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:52:20.630957 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:52:20.636415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:52:20.648488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:52:20.650107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:52:20.654815 disk-uuid[551]: Primary Header is updated.
Jul 14 21:52:20.654815 disk-uuid[551]: Secondary Entries is updated.
Jul 14 21:52:20.654815 disk-uuid[551]: Secondary Header is updated.
Jul 14 21:52:20.659324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:20.673890 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:21.672335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:21.672425 disk-uuid[552]: The operation has completed successfully.
Jul 14 21:52:21.689323 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:52:21.689419 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:52:21.716444 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:52:21.719024 sh[573]: Success
Jul 14 21:52:21.731331 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:52:21.770632 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:52:21.771859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:52:21.774613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:52:21.784954 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369
Jul 14 21:52:21.784986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:21.785005 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:52:21.786770 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:52:21.786789 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:52:21.791138 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:52:21.792137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:52:21.804427 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:52:21.805886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:52:21.812471 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:21.812511 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:21.813332 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:21.815330 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:21.821535 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:52:21.823316 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:21.827974 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:52:21.834454 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:52:21.895518 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:52:21.913467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:52:21.929099 ignition[663]: Ignition 2.19.0
Jul 14 21:52:21.929108 ignition[663]: Stage: fetch-offline
Jul 14 21:52:21.929141 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.929150 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.929380 ignition[663]: parsed url from cmdline: ""
Jul 14 21:52:21.929383 ignition[663]: no config URL provided
Jul 14 21:52:21.929387 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:52:21.929395 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:52:21.929418 ignition[663]: op(1): [started] loading QEMU firmware config module
Jul 14 21:52:21.929422 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:52:21.938689 ignition[663]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:52:21.940382 systemd-networkd[766]: lo: Link UP
Jul 14 21:52:21.940392 systemd-networkd[766]: lo: Gained carrier
Jul 14 21:52:21.941045 systemd-networkd[766]: Enumeration completed
Jul 14 21:52:21.941137 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:52:21.941536 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:21.941539 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:52:21.942228 systemd-networkd[766]: eth0: Link UP
Jul 14 21:52:21.942231 systemd-networkd[766]: eth0: Gained carrier
Jul 14 21:52:21.942237 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:21.942960 systemd[1]: Reached target network.target - Network.
Jul 14 21:52:21.964350 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:52:21.985088 ignition[663]: parsing config with SHA512: 16e58e05b1e016722a7de0ea45fdc52785e9b754887af547edc379a726bb2351894c043623a4ac3f518f5134c79b18531b9943ddeb79f3a61bbdf7cd6b7d0c36
Jul 14 21:52:21.990367 unknown[663]: fetched base config from "system"
Jul 14 21:52:21.990383 unknown[663]: fetched user config from "qemu"
Jul 14 21:52:21.991374 ignition[663]: fetch-offline: fetch-offline passed
Jul 14 21:52:21.991446 ignition[663]: Ignition finished successfully
Jul 14 21:52:21.993676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:52:21.995051 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:52:21.999456 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:52:22.009091 ignition[773]: Ignition 2.19.0
Jul 14 21:52:22.009102 ignition[773]: Stage: kargs
Jul 14 21:52:22.009267 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:22.009277 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:22.010131 ignition[773]: kargs: kargs passed
Jul 14 21:52:22.013690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:52:22.010170 ignition[773]: Ignition finished successfully
Jul 14 21:52:22.024483 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:52:22.034927 ignition[780]: Ignition 2.19.0
Jul 14 21:52:22.034936 ignition[780]: Stage: disks
Jul 14 21:52:22.035092 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:22.037646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:52:22.035101 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:22.039180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:52:22.035974 ignition[780]: disks: disks passed
Jul 14 21:52:22.040918 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:52:22.036017 ignition[780]: Ignition finished successfully
Jul 14 21:52:22.042939 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:52:22.044758 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:52:22.046218 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:52:22.057485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:52:22.066762 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:52:22.067039 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.65
Jul 14 21:52:22.067047 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jul 14 21:52:22.071077 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:52:22.073383 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:52:22.119323 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none.
Jul 14 21:52:22.119608 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:52:22.120856 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:52:22.133400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:52:22.135105 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:52:22.136580 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:52:22.136618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:52:22.143268 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 14 21:52:22.136639 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:52:22.143941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:52:22.147888 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:22.147910 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:22.147921 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:22.147313 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:52:22.151376 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:22.153059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:52:22.187276 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:52:22.190222 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:52:22.194166 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:52:22.198095 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:52:22.268831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:52:22.281444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:52:22.283716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:52:22.288318 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:22.303480 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:52:22.305223 ignition[913]: INFO : Ignition 2.19.0
Jul 14 21:52:22.305223 ignition[913]: INFO : Stage: mount
Jul 14 21:52:22.305223 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:22.305223 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:22.305223 ignition[913]: INFO : mount: mount passed
Jul 14 21:52:22.310944 ignition[913]: INFO : Ignition finished successfully
Jul 14 21:52:22.306793 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:52:22.316403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:52:22.783960 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:52:22.792553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:52:22.798875 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926)
Jul 14 21:52:22.798906 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:22.798917 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:22.800370 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:22.802319 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:22.803432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:52:22.818381 ignition[943]: INFO : Ignition 2.19.0
Jul 14 21:52:22.818381 ignition[943]: INFO : Stage: files
Jul 14 21:52:22.819985 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:22.819985 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:22.819985 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:52:22.823392 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:52:22.823392 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:52:22.823392 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:52:22.823392 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:52:22.823392 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:52:22.822647 unknown[943]: wrote ssh authorized keys file for user: core
Jul 14 21:52:22.830414 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:52:22.830414 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 14 21:52:22.917326 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 21:52:23.091885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:52:23.091885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:52:23.091885 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 14 21:52:23.489134 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 21:52:23.583354 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:52:23.585183 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 14 21:52:23.589865 systemd-networkd[766]: eth0: Gained IPv6LL
Jul 14 21:52:23.957137 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 21:52:24.462608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:52:24.462608 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 14 21:52:24.466143 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:52:24.493019 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:52:24.496895 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:52:24.498614 ignition[943]: INFO : files: files passed
Jul 14 21:52:24.498614 ignition[943]: INFO : Ignition finished successfully
Jul 14 21:52:24.498937 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 21:52:24.511508 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 21:52:24.514003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 21:52:24.517044 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:52:24.517129 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:52:24.521542 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:52:24.523279 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:24.523279 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:24.526358 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:24.526233 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:52:24.527871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:52:24.544512 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:52:24.563095 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:52:24.563202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:52:24.565510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:52:24.567362 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:52:24.569206 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:52:24.569907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:52:24.584914 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:52:24.592432 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:52:24.600006 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:52:24.601286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:52:24.603374 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:52:24.605207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:52:24.605344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:52:24.607868 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:52:24.609929 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:52:24.611553 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:52:24.613276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:52:24.615283 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 21:52:24.617246 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 21:52:24.619012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:52:24.620831 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 21:52:24.622699 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 21:52:24.624427 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 21:52:24.625968 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:52:24.626087 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:52:24.628355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:52:24.630315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:52:24.632300 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 21:52:24.634282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:52:24.635522 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 21:52:24.635636 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:52:24.638437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 21:52:24.638556 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:52:24.640458 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 21:52:24.642059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 21:52:24.645357 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:52:24.646586 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 21:52:24.648523 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 21:52:24.651720 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 21:52:24.651812 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:52:24.653440 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 21:52:24.653521 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:52:24.655066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 21:52:24.655175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:52:24.656848 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 21:52:24.656944 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 21:52:24.668466 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 21:52:24.670061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 21:52:24.670983 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 21:52:24.671103 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:52:24.673018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 21:52:24.673110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:52:24.679013 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 21:52:24.680115 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 21:52:24.682496 ignition[999]: INFO : Ignition 2.19.0
Jul 14 21:52:24.682496 ignition[999]: INFO : Stage: umount
Jul 14 21:52:24.682496 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:24.682496 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:24.682496 ignition[999]: INFO : umount: umount passed
Jul 14 21:52:24.682496 ignition[999]: INFO : Ignition finished successfully
Jul 14 21:52:24.683139 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 21:52:24.685117 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 21:52:24.685206 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 21:52:24.687450 systemd[1]: Stopped target network.target - Network.
Jul 14 21:52:24.688575 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 21:52:24.688647 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 21:52:24.690319 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 21:52:24.690372 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 21:52:24.691367 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 21:52:24.691413 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 21:52:24.693001 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 21:52:24.693047 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 21:52:24.695425 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 21:52:24.697231 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 21:52:24.703366 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jul 14 21:52:24.705043 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 21:52:24.705159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 21:52:24.706531 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 21:52:24.706564 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:52:24.713917 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 21:52:24.714815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 21:52:24.714873 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:52:24.716161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:52:24.717957 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 21:52:24.718062 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 21:52:24.721878 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 21:52:24.721929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:52:24.723630 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 21:52:24.723675 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:52:24.725416 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 21:52:24.725459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:52:24.728752 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 21:52:24.728838 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 21:52:24.734131 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 21:52:24.734273 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:52:24.736044 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 21:52:24.736082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:52:24.737780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 21:52:24.737810 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:52:24.739688 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 21:52:24.739737 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:52:24.742529 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 21:52:24.742574 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:52:24.744392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:52:24.744446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:24.759454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 21:52:24.760459 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 21:52:24.760526 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:52:24.762578 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 14 21:52:24.762622 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:52:24.764463 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 21:52:24.764506 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:52:24.766533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:52:24.766580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:24.768708 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 21:52:24.768800 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 21:52:24.770594 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 21:52:24.770664 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 21:52:24.773051 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 21:52:24.774249 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 21:52:24.774320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 21:52:24.785461 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 21:52:24.792729 systemd[1]: Switching root.
Jul 14 21:52:24.818526 systemd-journald[237]: Journal stopped
Jul 14 21:52:25.573144 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 14 21:52:25.573196 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 21:52:25.573212 kernel: SELinux: policy capability open_perms=1
Jul 14 21:52:25.573222 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 21:52:25.573232 kernel: SELinux: policy capability always_check_network=0
Jul 14 21:52:25.573253 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 21:52:25.573264 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 21:52:25.573274 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 21:52:25.573284 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 21:52:25.573294 kernel: audit: type=1403 audit(1752529944.988:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 21:52:25.573315 systemd[1]: Successfully loaded SELinux policy in 33.615ms.
Jul 14 21:52:25.573339 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.107ms.
Jul 14 21:52:25.573353 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 21:52:25.573365 systemd[1]: Detected virtualization kvm.
Jul 14 21:52:25.573375 systemd[1]: Detected architecture arm64.
Jul 14 21:52:25.573386 systemd[1]: Detected first boot.
Jul 14 21:52:25.573397 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:52:25.573408 zram_generator::config[1045]: No configuration found.
Jul 14 21:52:25.573419 systemd[1]: Populated /etc with preset unit settings.
Jul 14 21:52:25.573432 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 21:52:25.573443 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 21:52:25.573454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 21:52:25.573468 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 21:52:25.573479 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 21:52:25.573490 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 21:52:25.573502 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 21:52:25.573513 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 21:52:25.573524 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 21:52:25.573537 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 21:52:25.573547 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 21:52:25.573558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:52:25.573569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:52:25.573580 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 21:52:25.573592 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 21:52:25.573604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 21:52:25.573618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:52:25.573630 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 14 21:52:25.573642 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:52:25.573653 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 21:52:25.573664 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 21:52:25.573675 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:52:25.573686 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 21:52:25.573696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:52:25.573708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:52:25.573718 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:52:25.573731 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:52:25.573742 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 21:52:25.573753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 21:52:25.573764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:52:25.573775 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:52:25.573786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:52:25.573797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 21:52:25.573807 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 21:52:25.573818 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 21:52:25.573832 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 21:52:25.573843 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 21:52:25.573854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 21:52:25.573864 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 21:52:25.573876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 21:52:25.573887 systemd[1]: Reached target machines.target - Containers.
Jul 14 21:52:25.573897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 21:52:25.573908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:52:25.573920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:52:25.573931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 21:52:25.573942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:52:25.573953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 21:52:25.573964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:52:25.573975 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 21:52:25.573986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:52:25.573997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 21:52:25.574008 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 21:52:25.574021 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 21:52:25.574031 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 21:52:25.574042 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 21:52:25.574052 kernel: fuse: init (API version 7.39)
Jul 14 21:52:25.574063 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:52:25.574073 kernel: loop: module loaded
Jul 14 21:52:25.574082 kernel: ACPI: bus type drm_connector registered
Jul 14 21:52:25.574092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:52:25.574103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 21:52:25.574116 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 21:52:25.574127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:52:25.574138 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 21:52:25.574164 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 14 21:52:25.574185 systemd[1]: Stopped verity-setup.service.
Jul 14 21:52:25.574197 systemd-journald[1116]: Journal started
Jul 14 21:52:25.574219 systemd-journald[1116]: Runtime Journal (/run/log/journal/3e45320bb9bd472ba3d5b0cb2c51409b) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:52:25.362677 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 21:52:25.379766 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 21:52:25.380133 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 21:52:25.577191 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:52:25.577609 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 21:52:25.578700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 21:52:25.579863 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 21:52:25.580943 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 21:52:25.582135 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 21:52:25.583314 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 21:52:25.585363 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 21:52:25.586732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:52:25.588146 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 21:52:25.588292 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 21:52:25.589644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:52:25.589800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:52:25.591106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:52:25.591254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:52:25.592606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:52:25.592747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:52:25.594348 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 21:52:25.594485 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 21:52:25.595722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:52:25.595857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:52:25.597336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:52:25.598643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 21:52:25.600079 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 21:52:25.612271 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 21:52:25.621415 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 21:52:25.623535 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 21:52:25.624612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 21:52:25.624654 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:52:25.626755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 21:52:25.629062 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 21:52:25.631226 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 21:52:25.632447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:52:25.633841 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 21:52:25.638493 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 21:52:25.639725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:52:25.640746 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 21:52:25.641962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:52:25.647371 systemd-journald[1116]: Time spent on flushing to /var/log/journal/3e45320bb9bd472ba3d5b0cb2c51409b is 21.248ms for 858 entries.
Jul 14 21:52:25.647371 systemd-journald[1116]: System Journal (/var/log/journal/3e45320bb9bd472ba3d5b0cb2c51409b) is 8.0M, max 195.6M, 187.6M free.
Jul 14 21:52:25.679434 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 14 21:52:25.679478 kernel: loop0: detected capacity change from 0 to 114328
Jul 14 21:52:25.645490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:52:25.652567 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 21:52:25.655518 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:52:25.659377 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:52:25.660797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 21:52:25.662182 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 21:52:25.666001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 21:52:25.672024 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 21:52:25.676923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:52:25.679717 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 21:52:25.687356 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 21:52:25.695139 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jul 14 21:52:25.695157 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jul 14 21:52:25.700517 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 21:52:25.703523 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 21:52:25.706345 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 21:52:25.707988 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:52:25.714374 kernel: loop1: detected capacity change from 0 to 207008
Jul 14 21:52:25.715693 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 21:52:25.722349 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 21:52:25.723122 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 21:52:25.728671 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 14 21:52:25.744728 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 21:52:25.751549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:52:25.755333 kernel: loop2: detected capacity change from 0 to 114432
Jul 14 21:52:25.770019 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jul 14 21:52:25.770372 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jul 14 21:52:25.774501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:52:25.789441 kernel: loop3: detected capacity change from 0 to 114328
Jul 14 21:52:25.794508 kernel: loop4: detected capacity change from 0 to 207008
Jul 14 21:52:25.800464 kernel: loop5: detected capacity change from 0 to 114432
Jul 14 21:52:25.803614 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 21:52:25.803989 (sd-merge)[1183]: Merged extensions into '/usr'.
Jul 14 21:52:25.811257 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 21:52:25.811272 systemd[1]: Reloading...
Jul 14 21:52:25.864361 zram_generator::config[1206]: No configuration found.
Jul 14 21:52:25.922358 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 21:52:25.976805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:52:26.013503 systemd[1]: Reloading finished in 201 ms.
Jul 14 21:52:26.040499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 21:52:26.041973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 21:52:26.059550 systemd[1]: Starting ensure-sysext.service...
Jul 14 21:52:26.061588 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:52:26.083825 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 21:52:26.084102 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 21:52:26.084781 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 21:52:26.085006 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jul 14 21:52:26.085063 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jul 14 21:52:26.090006 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:52:26.090024 systemd-tmpfiles[1245]: Skipping /boot
Jul 14 21:52:26.094530 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jul 14 21:52:26.094547 systemd[1]: Reloading...
Jul 14 21:52:26.097160 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:52:26.097179 systemd-tmpfiles[1245]: Skipping /boot
Jul 14 21:52:26.136342 zram_generator::config[1272]: No configuration found.
Jul 14 21:52:26.223793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:52:26.261056 systemd[1]: Reloading finished in 166 ms.
Jul 14 21:52:26.276513 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 21:52:26.290844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:52:26.300025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 21:52:26.302931 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 21:52:26.305596 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 21:52:26.312407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:52:26.315662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:52:26.318076 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 21:52:26.322131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:52:26.324592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:52:26.327549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:52:26.329885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:52:26.333625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:52:26.348682 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 21:52:26.350179 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 21:52:26.359687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:52:26.359972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:52:26.363439 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 21:52:26.365247 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 21:52:26.368069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:52:26.368211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:52:26.373501 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 21:52:26.381733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:52:26.381951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:52:26.383578 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:52:26.383713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:52:26.385366 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 21:52:26.391688 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Jul 14 21:52:26.392722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:52:26.401561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:52:26.406509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 21:52:26.409943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:52:26.413704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:52:26.414981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:52:26.415053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:52:26.415442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:52:26.419816 systemd[1]: Finished ensure-sysext.service.
Jul 14 21:52:26.421053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:52:26.421210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:52:26.422714 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:52:26.422854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:52:26.424301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:52:26.424466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:52:26.426009 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:52:26.427356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:52:26.441847 augenrules[1353]: No rules
Jul 14 21:52:26.445676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 21:52:26.459538 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:52:26.462467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:52:26.462572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:52:26.473539 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 21:52:26.476549 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 21:52:26.480642 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 14 21:52:26.498432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1371)
Jul 14 21:52:26.549973 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 21:52:26.550929 systemd-resolved[1313]: Positive Trust Anchors:
Jul 14 21:52:26.551209 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:52:26.551328 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:52:26.552261 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 21:52:26.564255 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Jul 14 21:52:26.567875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:52:26.569550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:52:26.571458 systemd-networkd[1376]: lo: Link UP
Jul 14 21:52:26.571470 systemd-networkd[1376]: lo: Gained carrier
Jul 14 21:52:26.572150 systemd-networkd[1376]: Enumeration completed
Jul 14 21:52:26.572485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:52:26.574781 systemd[1]: Reached target network.target - Network.
Jul 14 21:52:26.581392 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:26.581402 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:52:26.582481 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:26.582515 systemd-networkd[1376]: eth0: Link UP
Jul 14 21:52:26.582518 systemd-networkd[1376]: eth0: Gained carrier
Jul 14 21:52:26.582526 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:26.586528 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 21:52:26.592448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:52:26.599039 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 21:52:26.601432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:26.604433 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:52:26.606031 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
Jul 14 21:52:26.606558 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 21:52:26.140718 systemd-resolved[1313]: Clock change detected. Flushing caches.
Jul 14 21:52:26.145509 systemd-journald[1116]: Time jumped backwards, rotating.
Jul 14 21:52:26.140721 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 21:52:26.140772 systemd-timesyncd[1377]: Initial clock synchronization to Mon 2025-07-14 21:52:26.140635 UTC.
Jul 14 21:52:26.141185 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 21:52:26.147737 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 21:52:26.160086 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:52:26.184363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:26.199045 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 21:52:26.200424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:52:26.201559 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 21:52:26.202692 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 21:52:26.203874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 21:52:26.205282 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 21:52:26.206406 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 21:52:26.207593 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 21:52:26.208767 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:52:26.208805 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:52:26.209636 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:52:26.211392 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 21:52:26.213679 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 21:52:26.222617 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 21:52:26.224718 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 21:52:26.226183 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 21:52:26.227277 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 21:52:26.228252 systemd[1]: Reached target basic.target - Basic System. Jul 14 21:52:26.229207 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 14 21:52:26.229235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:52:26.230155 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 21:52:26.232177 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 21:52:26.233756 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:52:26.235375 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 21:52:26.238005 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 21:52:26.241952 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 21:52:26.242969 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 21:52:26.245893 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 21:52:26.246705 jq[1414]: false Jul 14 21:52:26.248845 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 21:52:26.251947 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 21:52:26.257198 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 14 21:52:26.262369 extend-filesystems[1415]: Found loop3 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found loop4 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found loop5 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda1 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda2 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda3 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found usr Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda4 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda6 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda7 Jul 14 21:52:26.264866 extend-filesystems[1415]: Found vda9 Jul 14 21:52:26.264866 extend-filesystems[1415]: Checking size of /dev/vda9 Jul 14 21:52:26.262677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:52:26.280074 dbus-daemon[1413]: [system] SELinux support is enabled Jul 14 21:52:26.263102 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:52:26.304724 extend-filesystems[1415]: Resized partition /dev/vda9 Jul 14 21:52:26.308603 update_engine[1429]: I20250714 21:52:26.303791 1429 main.cc:92] Flatcar Update Engine starting Jul 14 21:52:26.265051 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 21:52:26.308942 jq[1431]: true Jul 14 21:52:26.269478 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 21:52:26.273764 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 21:52:26.278558 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:52:26.278720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 14 21:52:26.278961 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:52:26.279088 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 21:52:26.281844 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 21:52:26.300021 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:52:26.300186 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 21:52:26.317692 update_engine[1429]: I20250714 21:52:26.312844 1429 update_check_scheduler.cc:74] Next update check in 10m18s Jul 14 21:52:26.315968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:52:26.316020 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 21:52:26.320377 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:52:26.327465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Jul 14 21:52:26.327505 extend-filesystems[1440]: resize2fs 1.47.1 (20-May-2024) Jul 14 21:52:26.332776 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:52:26.321044 systemd-logind[1423]: New seat seat0. Jul 14 21:52:26.321057 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:52:26.321078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 21:52:26.329303 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 14 21:52:26.332001 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 21:52:26.333845 systemd[1]: Started update-engine.service - Update Engine. Jul 14 21:52:26.334506 jq[1438]: true Jul 14 21:52:26.335453 tar[1435]: linux-arm64/LICENSE Jul 14 21:52:26.335453 tar[1435]: linux-arm64/helm Jul 14 21:52:26.353067 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 21:52:26.359634 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:52:26.390365 extend-filesystems[1440]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:52:26.390365 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:52:26.390365 extend-filesystems[1440]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:52:26.397246 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jul 14 21:52:26.392111 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:52:26.393814 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 21:52:26.400881 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:52:26.402153 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 21:52:26.406749 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 21:52:26.415411 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:52:26.534625 containerd[1439]: time="2025-07-14T21:52:26.532854909Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 21:52:26.556302 containerd[1439]: time="2025-07-14T21:52:26.556246629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
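The extend-filesystems messages above can be sanity-checked: with the 4 KiB block size the kernel reports, growing /dev/vda9 online from 553472 to 1864699 blocks takes the root filesystem from about 2.1 GiB to about 7.1 GiB. A sketch of that arithmetic (block counts from the log; the helper name is ours):

```python
BLOCK_SIZE = 4096  # ext4 block size from the log: "1864699 (4k) blocks"

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

old_size = blocks_to_gib(553472)    # before the online resize
new_size = blocks_to_gib(1864699)   # after resize2fs finished
print(f"{old_size:.2f} GiB -> {new_size:.2f} GiB")  # prints 2.11 GiB -> 7.11 GiB
```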
type=io.containerd.snapshotter.v1 Jul 14 21:52:26.557815 containerd[1439]: time="2025-07-14T21:52:26.557771349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.557815 containerd[1439]: time="2025-07-14T21:52:26.557813989Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:52:26.557898 containerd[1439]: time="2025-07-14T21:52:26.557832309Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.558009 containerd[1439]: time="2025-07-14T21:52:26.557986589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 21:52:26.558034 containerd[1439]: time="2025-07-14T21:52:26.558009789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558083 containerd[1439]: time="2025-07-14T21:52:26.558065349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558109 containerd[1439]: time="2025-07-14T21:52:26.558083109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558258 containerd[1439]: time="2025-07-14T21:52:26.558235109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558258 containerd[1439]: time="2025-07-14T21:52:26.558255709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558297 containerd[1439]: time="2025-07-14T21:52:26.558276429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558297 containerd[1439]: time="2025-07-14T21:52:26.558286469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558372 containerd[1439]: time="2025-07-14T21:52:26.558355389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558771 containerd[1439]: time="2025-07-14T21:52:26.558744469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558942 containerd[1439]: time="2025-07-14T21:52:26.558915949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.558969 containerd[1439]: time="2025-07-14T21:52:26.558944989Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:52:26.559049 containerd[1439]: time="2025-07-14T21:52:26.559029469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 21:52:26.559098 containerd[1439]: time="2025-07-14T21:52:26.559084589Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:52:26.563020 containerd[1439]: time="2025-07-14T21:52:26.562986189Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:52:26.563069 containerd[1439]: time="2025-07-14T21:52:26.563035389Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:52:26.563069 containerd[1439]: time="2025-07-14T21:52:26.563051029Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 21:52:26.563069 containerd[1439]: time="2025-07-14T21:52:26.563065789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 21:52:26.563138 containerd[1439]: time="2025-07-14T21:52:26.563079149Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 21:52:26.563250 containerd[1439]: time="2025-07-14T21:52:26.563227349Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:52:26.563488 containerd[1439]: time="2025-07-14T21:52:26.563458469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:52:26.563818 containerd[1439]: time="2025-07-14T21:52:26.563792669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 21:52:26.563847 containerd[1439]: time="2025-07-14T21:52:26.563821749Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 21:52:26.563847 containerd[1439]: time="2025-07-14T21:52:26.563836189Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 14 21:52:26.563893 containerd[1439]: time="2025-07-14T21:52:26.563851789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563893 containerd[1439]: time="2025-07-14T21:52:26.563864989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563893 containerd[1439]: time="2025-07-14T21:52:26.563878349Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563941 containerd[1439]: time="2025-07-14T21:52:26.563892549Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563941 containerd[1439]: time="2025-07-14T21:52:26.563912509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563941 containerd[1439]: time="2025-07-14T21:52:26.563925549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.563941 containerd[1439]: time="2025-07-14T21:52:26.563937429Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.564006 containerd[1439]: time="2025-07-14T21:52:26.563948909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.564006 containerd[1439]: time="2025-07-14T21:52:26.563973349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564006 containerd[1439]: time="2025-07-14T21:52:26.563986749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 14 21:52:26.564006 containerd[1439]: time="2025-07-14T21:52:26.563999269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564070 containerd[1439]: time="2025-07-14T21:52:26.564012269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564070 containerd[1439]: time="2025-07-14T21:52:26.564024909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564070 containerd[1439]: time="2025-07-14T21:52:26.564037309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564070 containerd[1439]: time="2025-07-14T21:52:26.564049189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564070 containerd[1439]: time="2025-07-14T21:52:26.564066709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564150 containerd[1439]: time="2025-07-14T21:52:26.564083269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564150 containerd[1439]: time="2025-07-14T21:52:26.564098869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564150 containerd[1439]: time="2025-07-14T21:52:26.564110749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564150 containerd[1439]: time="2025-07-14T21:52:26.564122429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564150 containerd[1439]: time="2025-07-14T21:52:26.564136069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 14 21:52:26.564269 containerd[1439]: time="2025-07-14T21:52:26.564152349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 21:52:26.564269 containerd[1439]: time="2025-07-14T21:52:26.564173629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564269 containerd[1439]: time="2025-07-14T21:52:26.564186269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564269 containerd[1439]: time="2025-07-14T21:52:26.564201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.564334 containerd[1439]: time="2025-07-14T21:52:26.564315589Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:52:26.564352 containerd[1439]: time="2025-07-14T21:52:26.564332429Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 21:52:26.564352 containerd[1439]: time="2025-07-14T21:52:26.564342949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.564385 containerd[1439]: time="2025-07-14T21:52:26.564355229Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 21:52:26.564385 containerd[1439]: time="2025-07-14T21:52:26.564365989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564385 containerd[1439]: time="2025-07-14T21:52:26.564379029Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 14 21:52:26.564436 containerd[1439]: time="2025-07-14T21:52:26.564407189Z" level=info msg="NRI interface is disabled by configuration." Jul 14 21:52:26.564436 containerd[1439]: time="2025-07-14T21:52:26.564418909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.564997 containerd[1439]: time="2025-07-14T21:52:26.564884109Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:52:26.565109 containerd[1439]: time="2025-07-14T21:52:26.564998509Z" level=info msg="Connect containerd service" Jul 14 21:52:26.565109 containerd[1439]: time="2025-07-14T21:52:26.565032869Z" level=info msg="using legacy CRI server" Jul 14 21:52:26.565109 containerd[1439]: time="2025-07-14T21:52:26.565040069Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:52:26.565173 containerd[1439]: time="2025-07-14T21:52:26.565120789Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:52:26.565995 containerd[1439]: time="2025-07-14T21:52:26.565962309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566238949Z" level=info msg="Start subscribing containerd event" Jul 14 
21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566371149Z" level=info msg="Start recovering state" Jul 14 21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566468389Z" level=info msg="Start event monitor" Jul 14 21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566491509Z" level=info msg="Start snapshots syncer" Jul 14 21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566501429Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:52:26.566645 containerd[1439]: time="2025-07-14T21:52:26.566508989Z" level=info msg="Start streaming server" Jul 14 21:52:26.567332 containerd[1439]: time="2025-07-14T21:52:26.567249469Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:52:26.567370 containerd[1439]: time="2025-07-14T21:52:26.567356229Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:52:26.567882 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:52:26.569570 containerd[1439]: time="2025-07-14T21:52:26.569527109Z" level=info msg="containerd successfully booted in 0.038160s" Jul 14 21:52:26.741589 tar[1435]: linux-arm64/README.md Jul 14 21:52:26.751972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:52:27.117477 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:52:27.136415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 21:52:27.147946 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 21:52:27.153389 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:52:27.153601 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:52:27.156226 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:52:27.169414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:52:27.172469 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jul 14 21:52:27.174643 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 21:52:27.176005 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:52:27.919739 systemd-networkd[1376]: eth0: Gained IPv6LL Jul 14 21:52:27.923687 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:52:27.925417 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:52:27.935872 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:52:27.938269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:27.940400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:52:27.954483 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:52:27.954710 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 21:52:27.956200 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:52:27.959574 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 21:52:28.523771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:28.525405 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 21:52:28.527488 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:52:28.532153 systemd[1]: Startup finished in 595ms (kernel) + 5.280s (initrd) + 4.047s (userspace) = 9.923s. 
Jul 14 21:52:28.916136 kubelet[1525]: E0714 21:52:28.916020 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:52:28.918279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:52:28.918424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:52:31.706424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 21:52:31.707532 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:59442.service - OpenSSH per-connection server daemon (10.0.0.1:59442). Jul 14 21:52:31.788578 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 59442 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:31.790644 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:31.798885 systemd-logind[1423]: New session 1 of user core. Jul 14 21:52:31.799880 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:52:31.809864 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:52:31.818070 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:52:31.820114 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 21:52:31.825987 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:52:31.898775 systemd[1543]: Queued start job for default target default.target. Jul 14 21:52:31.908541 systemd[1543]: Created slice app.slice - User Application Slice. Jul 14 21:52:31.908569 systemd[1543]: Reached target paths.target - Paths. Jul 14 21:52:31.908581 systemd[1543]: Reached target timers.target - Timers. 
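The kubelet exit above is the expected first-boot failure mode: /var/lib/kubelet/config.yaml does not exist until kubeadm (or equivalent provisioning) writes it, so the kubelet exits with status 1 and systemd records the failure. A minimal sketch of the same check, assuming nothing beyond the path in the log (the helper is ours, not kubelet code):

```python
from pathlib import Path

def load_kubelet_config(path: str = "/var/lib/kubelet/config.yaml"):
    """Return the config file's text, or None if it does not exist yet,
    which is the condition that made kubelet exit with status 1 above."""
    p = Path(path)
    if not p.is_file():
        return None
    return p.read_text()

# On a node that has not been joined to a cluster yet this returns None;
# a supervisor can then back off and retry instead of crash-looping:
config = load_kubelet_config()
if config is None:
    print("kubelet config not written yet; waiting for kubeadm")
```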
Jul 14 21:52:31.909747 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:52:31.918854 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:52:31.918911 systemd[1543]: Reached target sockets.target - Sockets. Jul 14 21:52:31.918924 systemd[1543]: Reached target basic.target - Basic System. Jul 14 21:52:31.918959 systemd[1543]: Reached target default.target - Main User Target. Jul 14 21:52:31.918982 systemd[1543]: Startup finished in 88ms. Jul 14 21:52:31.919209 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:52:31.920461 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:52:31.983185 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:59448.service - OpenSSH per-connection server daemon (10.0.0.1:59448). Jul 14 21:52:32.024964 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 59448 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.026233 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.030909 systemd-logind[1423]: New session 2 of user core. Jul 14 21:52:32.037738 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 21:52:32.089126 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.102962 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:59448.service: Deactivated successfully. Jul 14 21:52:32.104379 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:52:32.106659 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:52:32.107721 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:59456.service - OpenSSH per-connection server daemon (10.0.0.1:59456). Jul 14 21:52:32.108423 systemd-logind[1423]: Removed session 2. 
Jul 14 21:52:32.140587 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 59456 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.141950 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.146142 systemd-logind[1423]: New session 3 of user core. Jul 14 21:52:32.162779 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:52:32.211488 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.220998 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:59456.service: Deactivated successfully. Jul 14 21:52:32.222917 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:52:32.224093 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:52:32.225129 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462). Jul 14 21:52:32.225914 systemd-logind[1423]: Removed session 3. Jul 14 21:52:32.258126 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.259634 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.263691 systemd-logind[1423]: New session 4 of user core. Jul 14 21:52:32.273862 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:52:32.325286 sshd[1568]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.333922 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:59462.service: Deactivated successfully. Jul 14 21:52:32.336810 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:52:32.338029 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:52:32.340235 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:59468.service - OpenSSH per-connection server daemon (10.0.0.1:59468). Jul 14 21:52:32.340959 systemd-logind[1423]: Removed session 4. 
Jul 14 21:52:32.372236 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 59468 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:52:32.373457 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:52:32.377757 systemd-logind[1423]: New session 5 of user core.
Jul 14 21:52:32.388787 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 14 21:52:32.448901 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 21:52:32.449179 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 21:52:32.463317 sudo[1578]: pam_unix(sudo:session): session closed for user root
Jul 14 21:52:32.465016 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jul 14 21:52:32.477045 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:59468.service: Deactivated successfully.
Jul 14 21:52:32.478328 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 21:52:32.481633 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit.
Jul 14 21:52:32.482718 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:56374.service - OpenSSH per-connection server daemon (10.0.0.1:56374).
Jul 14 21:52:32.483378 systemd-logind[1423]: Removed session 5.
Jul 14 21:52:32.514246 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 56374 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:52:32.515775 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:52:32.519377 systemd-logind[1423]: New session 6 of user core.
Jul 14 21:52:32.529745 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 14 21:52:32.581258 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 21:52:32.581541 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 21:52:32.584439 sudo[1587]: pam_unix(sudo:session): session closed for user root
Jul 14 21:52:32.588731 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 14 21:52:32.588990 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 21:52:32.615915 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 14 21:52:32.617092 auditctl[1590]: No rules
Jul 14 21:52:32.617935 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 21:52:32.619684 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 14 21:52:32.621365 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 21:52:32.644135 augenrules[1608]: No rules
Jul 14 21:52:32.645304 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 21:52:32.646309 sudo[1586]: pam_unix(sudo:session): session closed for user root
Jul 14 21:52:32.648471 sshd[1583]: pam_unix(sshd:session): session closed for user core
Jul 14 21:52:32.660967 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:56374.service: Deactivated successfully.
Jul 14 21:52:32.663909 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 21:52:32.665142 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit.
Jul 14 21:52:32.666386 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380).
Jul 14 21:52:32.667130 systemd-logind[1423]: Removed session 6.
Jul 14 21:52:32.698492 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:52:32.699838 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:52:32.703089 systemd-logind[1423]: New session 7 of user core.
Jul 14 21:52:32.711746 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 14 21:52:32.761697 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 21:52:32.762235 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 21:52:33.081847 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 14 21:52:33.081969 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 14 21:52:33.338703 dockerd[1637]: time="2025-07-14T21:52:33.338413389Z" level=info msg="Starting up"
Jul 14 21:52:33.512826 dockerd[1637]: time="2025-07-14T21:52:33.512769589Z" level=info msg="Loading containers: start."
Jul 14 21:52:33.606640 kernel: Initializing XFRM netlink socket
Jul 14 21:52:33.669594 systemd-networkd[1376]: docker0: Link UP
Jul 14 21:52:33.687891 dockerd[1637]: time="2025-07-14T21:52:33.687796829Z" level=info msg="Loading containers: done."
Jul 14 21:52:33.702076 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4294291545-merged.mount: Deactivated successfully.
Jul 14 21:52:33.703506 dockerd[1637]: time="2025-07-14T21:52:33.703459829Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 21:52:33.703578 dockerd[1637]: time="2025-07-14T21:52:33.703563029Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 14 21:52:33.703729 dockerd[1637]: time="2025-07-14T21:52:33.703701189Z" level=info msg="Daemon has completed initialization"
Jul 14 21:52:33.732861 dockerd[1637]: time="2025-07-14T21:52:33.732729909Z" level=info msg="API listen on /run/docker.sock"
Jul 14 21:52:33.732950 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 14 21:52:34.278152 containerd[1439]: time="2025-07-14T21:52:34.278105669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 14 21:52:34.897292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901329319.mount: Deactivated successfully.
Jul 14 21:52:35.679424 containerd[1439]: time="2025-07-14T21:52:35.679380029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:35.679853 containerd[1439]: time="2025-07-14T21:52:35.679814669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 14 21:52:35.680713 containerd[1439]: time="2025-07-14T21:52:35.680683549Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:35.683626 containerd[1439]: time="2025-07-14T21:52:35.683581069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:35.684862 containerd[1439]: time="2025-07-14T21:52:35.684829349Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.40668s"
Jul 14 21:52:35.684904 containerd[1439]: time="2025-07-14T21:52:35.684865309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 14 21:52:35.685551 containerd[1439]: time="2025-07-14T21:52:35.685478069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 14 21:52:36.688632 containerd[1439]: time="2025-07-14T21:52:36.688489669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:36.689565 containerd[1439]: time="2025-07-14T21:52:36.689331869Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 14 21:52:36.690372 containerd[1439]: time="2025-07-14T21:52:36.690332149Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:36.693246 containerd[1439]: time="2025-07-14T21:52:36.693203829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:36.695275 containerd[1439]: time="2025-07-14T21:52:36.695231469Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.00972384s"
Jul 14 21:52:36.695275 containerd[1439]: time="2025-07-14T21:52:36.695267669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 14 21:52:36.695749 containerd[1439]: time="2025-07-14T21:52:36.695721949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 14 21:52:37.664180 containerd[1439]: time="2025-07-14T21:52:37.664127669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:37.667280 containerd[1439]: time="2025-07-14T21:52:37.667225269Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 14 21:52:37.668076 containerd[1439]: time="2025-07-14T21:52:37.668037069Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:37.671247 containerd[1439]: time="2025-07-14T21:52:37.671204309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:37.672757 containerd[1439]: time="2025-07-14T21:52:37.672665749Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 976.90912ms"
Jul 14 21:52:37.672757 containerd[1439]: time="2025-07-14T21:52:37.672705309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 14 21:52:37.673679 containerd[1439]: time="2025-07-14T21:52:37.673511549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 14 21:52:38.617401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338522812.mount: Deactivated successfully.
Jul 14 21:52:38.982903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 21:52:38.991868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:52:39.084290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:52:39.087760 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 21:52:39.159051 containerd[1439]: time="2025-07-14T21:52:39.159006509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:39.160063 containerd[1439]: time="2025-07-14T21:52:39.160023589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 14 21:52:39.161209 containerd[1439]: time="2025-07-14T21:52:39.161086629Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:39.162821 containerd[1439]: time="2025-07-14T21:52:39.162774789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:39.163805 containerd[1439]: time="2025-07-14T21:52:39.163673189Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.49013124s"
Jul 14 21:52:39.163805 containerd[1439]: time="2025-07-14T21:52:39.163721149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 14 21:52:39.164197 containerd[1439]: time="2025-07-14T21:52:39.164173749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 14 21:52:39.171811 kubelet[1862]: E0714 21:52:39.171768 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:52:39.174821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:52:39.174958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:52:39.684484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494099504.mount: Deactivated successfully.
Jul 14 21:52:40.323773 containerd[1439]: time="2025-07-14T21:52:40.323728109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.324340 containerd[1439]: time="2025-07-14T21:52:40.324303789Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 14 21:52:40.325192 containerd[1439]: time="2025-07-14T21:52:40.325164629Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.328276 containerd[1439]: time="2025-07-14T21:52:40.328243349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.330510 containerd[1439]: time="2025-07-14T21:52:40.330469029Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.16626456s"
Jul 14 21:52:40.330510 containerd[1439]: time="2025-07-14T21:52:40.330507189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 14 21:52:40.331006 containerd[1439]: time="2025-07-14T21:52:40.330950829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 21:52:40.810001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758258950.mount: Deactivated successfully.
Jul 14 21:52:40.814514 containerd[1439]: time="2025-07-14T21:52:40.813791389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.815302 containerd[1439]: time="2025-07-14T21:52:40.815279309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 14 21:52:40.816546 containerd[1439]: time="2025-07-14T21:52:40.816513229Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.819240 containerd[1439]: time="2025-07-14T21:52:40.819208109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:40.819923 containerd[1439]: time="2025-07-14T21:52:40.819893469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 488.90868ms"
Jul 14 21:52:40.820026 containerd[1439]: time="2025-07-14T21:52:40.820008469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 14 21:52:40.820729 containerd[1439]: time="2025-07-14T21:52:40.820544269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 14 21:52:41.377755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603903472.mount: Deactivated successfully.
Jul 14 21:52:42.900640 containerd[1439]: time="2025-07-14T21:52:42.900087349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:42.901252 containerd[1439]: time="2025-07-14T21:52:42.901229989Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 14 21:52:42.904643 containerd[1439]: time="2025-07-14T21:52:42.904421509Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:42.908952 containerd[1439]: time="2025-07-14T21:52:42.908920709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:52:42.910531 containerd[1439]: time="2025-07-14T21:52:42.910301869Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.08972484s"
Jul 14 21:52:42.910531 containerd[1439]: time="2025-07-14T21:52:42.910337149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 14 21:52:48.732312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:52:48.744078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:52:48.763774 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit session-7.scope)...
Jul 14 21:52:48.763931 systemd[1]: Reloading...
Jul 14 21:52:48.831633 zram_generator::config[2050]: No configuration found.
Jul 14 21:52:48.965577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:52:49.019447 systemd[1]: Reloading finished in 255 ms.
Jul 14 21:52:49.062491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:52:49.065513 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 21:52:49.065726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:52:49.067239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:52:49.168782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:52:49.172883 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 21:52:49.210407 kubelet[2097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:52:49.210407 kubelet[2097]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 14 21:52:49.210407 kubelet[2097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:52:49.210817 kubelet[2097]: I0714 21:52:49.210462 2097 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:52:49.678820 kubelet[2097]: I0714 21:52:49.678780 2097 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 14 21:52:49.678820 kubelet[2097]: I0714 21:52:49.678810 2097 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:52:49.679076 kubelet[2097]: I0714 21:52:49.679059 2097 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 14 21:52:49.721704 kubelet[2097]: E0714 21:52:49.721644 2097 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:52:49.723648 kubelet[2097]: I0714 21:52:49.723544 2097 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:52:49.729671 kubelet[2097]: E0714 21:52:49.729640 2097 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 21:52:49.729671 kubelet[2097]: I0714 21:52:49.729665 2097 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 21:52:49.732146 kubelet[2097]: I0714 21:52:49.732126 2097 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:52:49.732338 kubelet[2097]: I0714 21:52:49.732303 2097 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:52:49.732496 kubelet[2097]: I0714 21:52:49.732329 2097 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:52:49.732578 kubelet[2097]: I0714 21:52:49.732558 2097 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:52:49.732578 kubelet[2097]: I0714 21:52:49.732567 2097 container_manager_linux.go:304] "Creating device plugin manager"
Jul 14 21:52:49.732779 kubelet[2097]: I0714 21:52:49.732753 2097 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:52:49.737045 kubelet[2097]: I0714 21:52:49.737002 2097 kubelet.go:446] "Attempting to sync node with API server"
Jul 14 21:52:49.737045 kubelet[2097]: I0714 21:52:49.737022 2097 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:52:49.737045 kubelet[2097]: I0714 21:52:49.737043 2097 kubelet.go:352] "Adding apiserver pod source"
Jul 14 21:52:49.737410 kubelet[2097]: I0714 21:52:49.737054 2097 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:52:49.739167 kubelet[2097]: W0714 21:52:49.739124 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jul 14 21:52:49.739221 kubelet[2097]: E0714 21:52:49.739178 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:52:49.740168 kubelet[2097]: W0714 21:52:49.740132 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jul 14 21:52:49.740272 kubelet[2097]: E0714 21:52:49.740253 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:52:49.740704 kubelet[2097]: I0714 21:52:49.740685 2097 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 21:52:49.741356 kubelet[2097]: I0714 21:52:49.741344 2097 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 21:52:49.742849 kubelet[2097]: W0714 21:52:49.742630 2097 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 21:52:49.743486 kubelet[2097]: I0714 21:52:49.743452 2097 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 21:52:49.743546 kubelet[2097]: I0714 21:52:49.743498 2097 server.go:1287] "Started kubelet"
Jul 14 21:52:49.743939 kubelet[2097]: I0714 21:52:49.743899 2097 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:52:49.744112 kubelet[2097]: I0714 21:52:49.744063 2097 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:52:49.744380 kubelet[2097]: I0714 21:52:49.744355 2097 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:52:49.745108 kubelet[2097]: I0714 21:52:49.745091 2097 server.go:479] "Adding debug handlers to kubelet server"
Jul 14 21:52:49.746838 kubelet[2097]: I0714 21:52:49.746807 2097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:52:49.747113 kubelet[2097]: I0714 21:52:49.747094 2097 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:52:49.749668 kubelet[2097]: I0714 21:52:49.748235 2097 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 21:52:49.749668 kubelet[2097]: E0714 21:52:49.748393 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:52:49.749668 kubelet[2097]: I0714 21:52:49.748598 2097 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 21:52:49.749668 kubelet[2097]: I0714 21:52:49.748669 2097 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:52:49.749668 kubelet[2097]: W0714 21:52:49.749167 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jul 14 21:52:49.749668 kubelet[2097]: E0714 21:52:49.749202 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:52:49.749668 kubelet[2097]: E0714 21:52:49.749250 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms"
Jul 14 21:52:49.750242 kubelet[2097]: I0714 21:52:49.750061 2097 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:52:49.750242 kubelet[2097]: I0714 21:52:49.750147 2097 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:52:49.752714 kubelet[2097]: I0714 21:52:49.751535 2097 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:52:49.752714 kubelet[2097]: E0714 21:52:49.751511 2097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523cb8e0c9362d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:52:49.743476269 +0000 UTC m=+0.567605961,LastTimestamp:2025-07-14 21:52:49.743476269 +0000 UTC m=+0.567605961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 21:52:49.761204 kubelet[2097]: I0714 21:52:49.761149 2097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:52:49.762187 kubelet[2097]: I0714 21:52:49.762136 2097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:52:49.762187 kubelet[2097]: I0714 21:52:49.762159 2097 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 14 21:52:49.762187 kubelet[2097]: I0714 21:52:49.762174 2097 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 14 21:52:49.762187 kubelet[2097]: I0714 21:52:49.762182 2097 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:52:49.762318 kubelet[2097]: E0714 21:52:49.762217 2097 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:52:49.766439 kubelet[2097]: I0714 21:52:49.766244 2097 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:52:49.766439 kubelet[2097]: I0714 21:52:49.766257 2097 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:52:49.766439 kubelet[2097]: I0714 21:52:49.766281 2097 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:49.767133 kubelet[2097]: W0714 21:52:49.767095 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 14 21:52:49.767252 kubelet[2097]: E0714 21:52:49.767233 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:49.841283 kubelet[2097]: I0714 21:52:49.841254 2097 policy_none.go:49] "None policy: Start" Jul 14 21:52:49.841752 kubelet[2097]: I0714 21:52:49.841433 2097 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:52:49.841752 kubelet[2097]: I0714 21:52:49.841459 2097 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:52:49.846850 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 14 21:52:49.849419 kubelet[2097]: E0714 21:52:49.849392 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:49.859995 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 21:52:49.862402 kubelet[2097]: E0714 21:52:49.862378 2097 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:52:49.862624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 21:52:49.883502 kubelet[2097]: I0714 21:52:49.883437 2097 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:52:49.883700 kubelet[2097]: I0714 21:52:49.883655 2097 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:52:49.883700 kubelet[2097]: I0714 21:52:49.883674 2097 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:52:49.883945 kubelet[2097]: I0714 21:52:49.883828 2097 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:52:49.884985 kubelet[2097]: E0714 21:52:49.884964 2097 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 21:52:49.885076 kubelet[2097]: E0714 21:52:49.885065 2097 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:52:49.950742 kubelet[2097]: E0714 21:52:49.950640 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Jul 14 21:52:49.985771 kubelet[2097]: I0714 21:52:49.985740 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:52:49.986246 kubelet[2097]: E0714 21:52:49.986210 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.070106 systemd[1]: Created slice kubepods-burstable-pod5c9a34cba65408693c74603e3bfea2c2.slice - libcontainer container kubepods-burstable-pod5c9a34cba65408693c74603e3bfea2c2.slice. Jul 14 21:52:50.087937 kubelet[2097]: E0714 21:52:50.087905 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:50.091018 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 14 21:52:50.092688 kubelet[2097]: E0714 21:52:50.092662 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:50.094768 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 14 21:52:50.096148 kubelet[2097]: E0714 21:52:50.096098 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:50.151512 kubelet[2097]: I0714 21:52:50.151479 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.151718 kubelet[2097]: I0714 21:52:50.151518 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.151718 kubelet[2097]: I0714 21:52:50.151537 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:50.151718 kubelet[2097]: I0714 21:52:50.151551 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.151718 kubelet[2097]: I0714 21:52:50.151578 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.151718 kubelet[2097]: I0714 21:52:50.151601 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.151829 kubelet[2097]: I0714 21:52:50.151644 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.151829 kubelet[2097]: I0714 21:52:50.151661 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.151829 kubelet[2097]: I0714 21:52:50.151684 2097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.187674 kubelet[2097]: I0714 21:52:50.187637 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 
21:52:50.188098 kubelet[2097]: E0714 21:52:50.188051 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.351505 kubelet[2097]: E0714 21:52:50.351436 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Jul 14 21:52:50.388816 kubelet[2097]: E0714 21:52:50.388794 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.389453 containerd[1439]: time="2025-07-14T21:52:50.389416669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c9a34cba65408693c74603e3bfea2c2,Namespace:kube-system,Attempt:0,}" Jul 14 21:52:50.394211 kubelet[2097]: E0714 21:52:50.393956 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.394747 containerd[1439]: time="2025-07-14T21:52:50.394456029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 14 21:52:50.396864 kubelet[2097]: E0714 21:52:50.396844 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.400770 containerd[1439]: time="2025-07-14T21:52:50.400730789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 
14 21:52:50.590169 kubelet[2097]: I0714 21:52:50.589802 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:52:50.590169 kubelet[2097]: E0714 21:52:50.590129 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.741638 kubelet[2097]: W0714 21:52:50.741185 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 14 21:52:50.741638 kubelet[2097]: E0714 21:52:50.741284 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:50.910313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470601186.mount: Deactivated successfully. 
Jul 14 21:52:50.917577 containerd[1439]: time="2025-07-14T21:52:50.917525309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.918308 containerd[1439]: time="2025-07-14T21:52:50.918264469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 21:52:50.919039 containerd[1439]: time="2025-07-14T21:52:50.919005029Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.920231 containerd[1439]: time="2025-07-14T21:52:50.920200349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.920558 containerd[1439]: time="2025-07-14T21:52:50.920523349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:52:50.921625 containerd[1439]: time="2025-07-14T21:52:50.921589389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:52:50.922035 containerd[1439]: time="2025-07-14T21:52:50.922015909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.924606 containerd[1439]: time="2025-07-14T21:52:50.924462749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.92296ms" Jul 14 21:52:50.925814 containerd[1439]: time="2025-07-14T21:52:50.925786469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.926929 containerd[1439]: time="2025-07-14T21:52:50.926842429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.33876ms" Jul 14 21:52:50.929211 containerd[1439]: time="2025-07-14T21:52:50.929146669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.348ms" Jul 14 21:52:50.954646 kubelet[2097]: W0714 21:52:50.953807 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 14 21:52:50.954646 kubelet[2097]: E0714 21:52:50.953872 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 
14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132638669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132676549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132686909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132750709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132379669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132447989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132466829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.133289 containerd[1439]: time="2025-07-14T21:52:51.132556909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.133580 containerd[1439]: time="2025-07-14T21:52:51.133487029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.133580 containerd[1439]: time="2025-07-14T21:52:51.133529749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.133580 containerd[1439]: time="2025-07-14T21:52:51.133540709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.133709 containerd[1439]: time="2025-07-14T21:52:51.133607949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.152632 kubelet[2097]: E0714 21:52:51.152503 2097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Jul 14 21:52:51.156821 systemd[1]: Started cri-containerd-515399452dd15df90933d13f17a7a5e79159c3f375029d516b28fcb3731ee0f7.scope - libcontainer container 515399452dd15df90933d13f17a7a5e79159c3f375029d516b28fcb3731ee0f7. Jul 14 21:52:51.158163 systemd[1]: Started cri-containerd-5d355d6b1b311df7f8d896d79cbef966338f09ac79192f10a3249ba1dc990566.scope - libcontainer container 5d355d6b1b311df7f8d896d79cbef966338f09ac79192f10a3249ba1dc990566. Jul 14 21:52:51.160792 systemd[1]: Started cri-containerd-2a22668dbf8935e627a905c7bc30d0a5e67a40fd8a7be4ad1b444c319ae4190e.scope - libcontainer container 2a22668dbf8935e627a905c7bc30d0a5e67a40fd8a7be4ad1b444c319ae4190e. 
Jul 14 21:52:51.191844 containerd[1439]: time="2025-07-14T21:52:51.191772789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c9a34cba65408693c74603e3bfea2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a22668dbf8935e627a905c7bc30d0a5e67a40fd8a7be4ad1b444c319ae4190e\"" Jul 14 21:52:51.192511 containerd[1439]: time="2025-07-14T21:52:51.192318469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"515399452dd15df90933d13f17a7a5e79159c3f375029d516b28fcb3731ee0f7\"" Jul 14 21:52:51.193080 kubelet[2097]: E0714 21:52:51.192808 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.194846 kubelet[2097]: E0714 21:52:51.194811 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.195366 containerd[1439]: time="2025-07-14T21:52:51.195146149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d355d6b1b311df7f8d896d79cbef966338f09ac79192f10a3249ba1dc990566\"" Jul 14 21:52:51.195580 kubelet[2097]: E0714 21:52:51.195561 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.196113 kubelet[2097]: W0714 21:52:51.196071 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 14 
21:52:51.196335 kubelet[2097]: E0714 21:52:51.196296 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.196550 containerd[1439]: time="2025-07-14T21:52:51.196515309Z" level=info msg="CreateContainer within sandbox \"2a22668dbf8935e627a905c7bc30d0a5e67a40fd8a7be4ad1b444c319ae4190e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:52:51.198983 containerd[1439]: time="2025-07-14T21:52:51.198948749Z" level=info msg="CreateContainer within sandbox \"515399452dd15df90933d13f17a7a5e79159c3f375029d516b28fcb3731ee0f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:52:51.199173 containerd[1439]: time="2025-07-14T21:52:51.198951309Z" level=info msg="CreateContainer within sandbox \"5d355d6b1b311df7f8d896d79cbef966338f09ac79192f10a3249ba1dc990566\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:52:51.214694 containerd[1439]: time="2025-07-14T21:52:51.214646349Z" level=info msg="CreateContainer within sandbox \"2a22668dbf8935e627a905c7bc30d0a5e67a40fd8a7be4ad1b444c319ae4190e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23c746a9f1fae0a6e745581f4df3e97701d9e3fbb04ec2e1a8a45f211cdf81ab\"" Jul 14 21:52:51.215414 containerd[1439]: time="2025-07-14T21:52:51.215390309Z" level=info msg="StartContainer for \"23c746a9f1fae0a6e745581f4df3e97701d9e3fbb04ec2e1a8a45f211cdf81ab\"" Jul 14 21:52:51.217746 containerd[1439]: time="2025-07-14T21:52:51.217704909Z" level=info msg="CreateContainer within sandbox \"515399452dd15df90933d13f17a7a5e79159c3f375029d516b28fcb3731ee0f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"a06350f87c05954668c5a5c92cfa02ccf4beb8a4733c0fb71121aaead1487396\"" Jul 14 21:52:51.218204 containerd[1439]: time="2025-07-14T21:52:51.218165869Z" level=info msg="CreateContainer within sandbox \"5d355d6b1b311df7f8d896d79cbef966338f09ac79192f10a3249ba1dc990566\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f72a0d5d5737bad879b86e0bf03a7b3a81ba41e1d2d01839b42064a21b9a1c8b\"" Jul 14 21:52:51.218599 containerd[1439]: time="2025-07-14T21:52:51.218568669Z" level=info msg="StartContainer for \"f72a0d5d5737bad879b86e0bf03a7b3a81ba41e1d2d01839b42064a21b9a1c8b\"" Jul 14 21:52:51.219100 containerd[1439]: time="2025-07-14T21:52:51.219071549Z" level=info msg="StartContainer for \"a06350f87c05954668c5a5c92cfa02ccf4beb8a4733c0fb71121aaead1487396\"" Jul 14 21:52:51.239163 kubelet[2097]: W0714 21:52:51.239035 2097 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 14 21:52:51.239723 kubelet[2097]: E0714 21:52:51.239688 2097 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.241783 systemd[1]: Started cri-containerd-23c746a9f1fae0a6e745581f4df3e97701d9e3fbb04ec2e1a8a45f211cdf81ab.scope - libcontainer container 23c746a9f1fae0a6e745581f4df3e97701d9e3fbb04ec2e1a8a45f211cdf81ab. Jul 14 21:52:51.245743 systemd[1]: Started cri-containerd-a06350f87c05954668c5a5c92cfa02ccf4beb8a4733c0fb71121aaead1487396.scope - libcontainer container a06350f87c05954668c5a5c92cfa02ccf4beb8a4733c0fb71121aaead1487396. 
Jul 14 21:52:51.246997 systemd[1]: Started cri-containerd-f72a0d5d5737bad879b86e0bf03a7b3a81ba41e1d2d01839b42064a21b9a1c8b.scope - libcontainer container f72a0d5d5737bad879b86e0bf03a7b3a81ba41e1d2d01839b42064a21b9a1c8b. Jul 14 21:52:51.297995 containerd[1439]: time="2025-07-14T21:52:51.290457069Z" level=info msg="StartContainer for \"23c746a9f1fae0a6e745581f4df3e97701d9e3fbb04ec2e1a8a45f211cdf81ab\" returns successfully" Jul 14 21:52:51.297995 containerd[1439]: time="2025-07-14T21:52:51.290589429Z" level=info msg="StartContainer for \"f72a0d5d5737bad879b86e0bf03a7b3a81ba41e1d2d01839b42064a21b9a1c8b\" returns successfully" Jul 14 21:52:51.331039 containerd[1439]: time="2025-07-14T21:52:51.330994229Z" level=info msg="StartContainer for \"a06350f87c05954668c5a5c92cfa02ccf4beb8a4733c0fb71121aaead1487396\" returns successfully" Jul 14 21:52:51.401212 kubelet[2097]: I0714 21:52:51.391698 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:52:51.401212 kubelet[2097]: E0714 21:52:51.392091 2097 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 14 21:52:51.771571 kubelet[2097]: E0714 21:52:51.771362 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:51.771571 kubelet[2097]: E0714 21:52:51.771509 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.772302 kubelet[2097]: E0714 21:52:51.772278 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:51.772388 kubelet[2097]: E0714 21:52:51.772375 2097 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.774602 kubelet[2097]: E0714 21:52:51.774431 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:51.774602 kubelet[2097]: E0714 21:52:51.774548 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:52.780059 kubelet[2097]: E0714 21:52:52.780029 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:52.780376 kubelet[2097]: E0714 21:52:52.780142 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:52.780727 kubelet[2097]: E0714 21:52:52.780548 2097 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:52:52.780727 kubelet[2097]: E0714 21:52:52.780670 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:52.901398 kubelet[2097]: E0714 21:52:52.901353 2097 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:52:52.993989 kubelet[2097]: I0714 21:52:52.993415 2097 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:52:53.004363 kubelet[2097]: I0714 21:52:53.004325 2097 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 
21:52:53.004546 kubelet[2097]: E0714 21:52:53.004530 2097 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:52:53.025250 kubelet[2097]: E0714 21:52:53.025199 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.126033 kubelet[2097]: E0714 21:52:53.125927 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.226695 kubelet[2097]: E0714 21:52:53.226657 2097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.348689 kubelet[2097]: I0714 21:52:53.348661 2097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:53.356429 kubelet[2097]: E0714 21:52:53.356398 2097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:53.356429 kubelet[2097]: I0714 21:52:53.356426 2097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:53.357975 kubelet[2097]: E0714 21:52:53.357953 2097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:53.357975 kubelet[2097]: I0714 21:52:53.357975 2097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:53.359240 kubelet[2097]: E0714 21:52:53.359214 2097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:53.738504 kubelet[2097]: I0714 21:52:53.738469 2097 apiserver.go:52] "Watching apiserver" Jul 14 21:52:53.748888 kubelet[2097]: I0714 21:52:53.748856 2097 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:52:53.796407 kubelet[2097]: I0714 21:52:53.796364 2097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:53.798251 kubelet[2097]: E0714 21:52:53.798226 2097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:53.798387 kubelet[2097]: E0714 21:52:53.798372 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:53.828701 kubelet[2097]: I0714 21:52:53.828677 2097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:53.830730 kubelet[2097]: E0714 21:52:53.830706 2097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:53.830857 kubelet[2097]: E0714 21:52:53.830844 2097 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:55.136601 systemd[1]: Reloading requested from client PID 2379 ('systemctl') (unit session-7.scope)... Jul 14 21:52:55.136632 systemd[1]: Reloading... Jul 14 21:52:55.197642 zram_generator::config[2419]: No configuration found. 
Jul 14 21:52:55.287804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:52:55.353938 systemd[1]: Reloading finished in 217 ms. Jul 14 21:52:55.383251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:55.393954 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:52:55.394167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:55.405944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:55.505594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:55.510559 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:52:55.553996 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:52:55.553996 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:52:55.553996 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:52:55.554338 kubelet[2460]: I0714 21:52:55.554107 2460 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:52:55.566231 kubelet[2460]: I0714 21:52:55.566049 2460 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:52:55.566231 kubelet[2460]: I0714 21:52:55.566079 2460 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:52:55.566685 kubelet[2460]: I0714 21:52:55.566371 2460 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:52:55.567753 kubelet[2460]: I0714 21:52:55.567731 2460 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 21:52:55.570393 kubelet[2460]: I0714 21:52:55.570346 2460 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:52:55.574676 kubelet[2460]: E0714 21:52:55.574597 2460 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:52:55.574676 kubelet[2460]: I0714 21:52:55.574666 2460 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:52:55.579522 kubelet[2460]: I0714 21:52:55.579488 2460 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:52:55.579740 kubelet[2460]: I0714 21:52:55.579707 2460 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:52:55.579906 kubelet[2460]: I0714 21:52:55.579731 2460 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:52:55.579978 kubelet[2460]: I0714 21:52:55.579910 2460 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 21:52:55.579978 kubelet[2460]: I0714 21:52:55.579919 2460 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:52:55.579978 kubelet[2460]: I0714 21:52:55.579961 2460 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:55.580100 kubelet[2460]: I0714 21:52:55.580089 2460 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:52:55.580125 kubelet[2460]: I0714 21:52:55.580104 2460 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:52:55.580125 kubelet[2460]: I0714 21:52:55.580121 2460 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:52:55.580171 kubelet[2460]: I0714 21:52:55.580130 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:52:55.581347 kubelet[2460]: I0714 21:52:55.580768 2460 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:52:55.581347 kubelet[2460]: I0714 21:52:55.581212 2460 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:52:55.581650 kubelet[2460]: I0714 21:52:55.581625 2460 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:52:55.581703 kubelet[2460]: I0714 21:52:55.581664 2460 server.go:1287] "Started kubelet" Jul 14 21:52:55.582731 kubelet[2460]: I0714 21:52:55.582602 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:52:55.583033 kubelet[2460]: I0714 21:52:55.583012 2460 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:52:55.583162 kubelet[2460]: I0714 21:52:55.583132 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:52:55.583224 kubelet[2460]: I0714 21:52:55.583139 2460 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:52:55.584096 kubelet[2460]: I0714 21:52:55.584075 2460 
server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:52:55.585062 kubelet[2460]: I0714 21:52:55.585019 2460 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:52:55.586930 kubelet[2460]: E0714 21:52:55.586902 2460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:55.587121 kubelet[2460]: I0714 21:52:55.587108 2460 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:52:55.587243 kubelet[2460]: E0714 21:52:55.587213 2460 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:52:55.587422 kubelet[2460]: I0714 21:52:55.587402 2460 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:52:55.590814 kubelet[2460]: I0714 21:52:55.590790 2460 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:52:55.597662 kubelet[2460]: I0714 21:52:55.596342 2460 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:52:55.597662 kubelet[2460]: I0714 21:52:55.596366 2460 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:52:55.597662 kubelet[2460]: I0714 21:52:55.596470 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:52:55.607949 kubelet[2460]: I0714 21:52:55.607834 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:52:55.609429 kubelet[2460]: I0714 21:52:55.609396 2460 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 21:52:55.609429 kubelet[2460]: I0714 21:52:55.609421 2460 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:52:55.609532 kubelet[2460]: I0714 21:52:55.609443 2460 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 21:52:55.609532 kubelet[2460]: I0714 21:52:55.609459 2460 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:52:55.609532 kubelet[2460]: E0714 21:52:55.609500 2460 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:52:55.647175 kubelet[2460]: I0714 21:52:55.647069 2460 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:52:55.647175 kubelet[2460]: I0714 21:52:55.647088 2460 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:52:55.647175 kubelet[2460]: I0714 21:52:55.647109 2460 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:55.647312 kubelet[2460]: I0714 21:52:55.647291 2460 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:52:55.647589 kubelet[2460]: I0714 21:52:55.647302 2460 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:52:55.647589 kubelet[2460]: I0714 21:52:55.647320 2460 policy_none.go:49] "None policy: Start" Jul 14 21:52:55.647589 kubelet[2460]: I0714 21:52:55.647328 2460 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:52:55.647589 kubelet[2460]: I0714 21:52:55.647337 2460 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:52:55.647589 kubelet[2460]: I0714 21:52:55.647429 2460 state_mem.go:75] "Updated machine memory state" Jul 14 21:52:55.653829 kubelet[2460]: I0714 21:52:55.653793 2460 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:52:55.654085 kubelet[2460]: I0714 
21:52:55.653963 2460 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:52:55.654212 kubelet[2460]: I0714 21:52:55.654171 2460 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:52:55.654674 kubelet[2460]: I0714 21:52:55.654422 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:52:55.656579 kubelet[2460]: E0714 21:52:55.656540 2460 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 21:52:55.710534 kubelet[2460]: I0714 21:52:55.710486 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.710682 kubelet[2460]: I0714 21:52:55.710552 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:55.710682 kubelet[2460]: I0714 21:52:55.710486 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.758926 kubelet[2460]: I0714 21:52:55.758750 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:52:55.766852 kubelet[2460]: I0714 21:52:55.766817 2460 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 21:52:55.766969 kubelet[2460]: I0714 21:52:55.766899 2460 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 21:52:55.792556 kubelet[2460]: I0714 21:52:55.792308 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.792556 kubelet[2460]: I0714 21:52:55.792343 2460 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.792556 kubelet[2460]: I0714 21:52:55.792365 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:55.792556 kubelet[2460]: I0714 21:52:55.792383 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.792556 kubelet[2460]: I0714 21:52:55.792399 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.792789 kubelet[2460]: I0714 21:52:55.792413 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c9a34cba65408693c74603e3bfea2c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c9a34cba65408693c74603e3bfea2c2\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.792789 kubelet[2460]: I0714 21:52:55.792428 2460 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.792789 kubelet[2460]: I0714 21:52:55.792443 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.792789 kubelet[2460]: I0714 21:52:55.792467 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:56.016691 kubelet[2460]: E0714 21:52:56.016644 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.016691 kubelet[2460]: E0714 21:52:56.016690 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.016849 kubelet[2460]: E0714 21:52:56.016789 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.136959 sudo[2496]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 
21:52:56.137232 sudo[2496]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 21:52:56.568269 sudo[2496]: pam_unix(sudo:session): session closed for user root Jul 14 21:52:56.581582 kubelet[2460]: I0714 21:52:56.581334 2460 apiserver.go:52] "Watching apiserver" Jul 14 21:52:56.587722 kubelet[2460]: I0714 21:52:56.587690 2460 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:52:56.627329 kubelet[2460]: E0714 21:52:56.626979 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.627602 kubelet[2460]: I0714 21:52:56.627571 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:56.628261 kubelet[2460]: E0714 21:52:56.628227 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.633947 kubelet[2460]: E0714 21:52:56.633762 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:56.633947 kubelet[2460]: E0714 21:52:56.633882 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.646170 kubelet[2460]: I0714 21:52:56.646061 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.646029669 podStartE2EDuration="1.646029669s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.644826589 +0000 UTC 
m=+1.131072761" watchObservedRunningTime="2025-07-14 21:52:56.646029669 +0000 UTC m=+1.132275841" Jul 14 21:52:56.658811 kubelet[2460]: I0714 21:52:56.658751 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.658733789 podStartE2EDuration="1.658733789s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.651838069 +0000 UTC m=+1.138084241" watchObservedRunningTime="2025-07-14 21:52:56.658733789 +0000 UTC m=+1.144979961" Jul 14 21:52:57.628690 kubelet[2460]: E0714 21:52:57.628532 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:57.628690 kubelet[2460]: E0714 21:52:57.628627 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:58.267265 kubelet[2460]: E0714 21:52:58.267236 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:58.999493 sudo[1619]: pam_unix(sudo:session): session closed for user root Jul 14 21:52:59.001048 sshd[1616]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:59.004448 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:56380.service: Deactivated successfully. Jul 14 21:52:59.007099 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:52:59.007322 systemd[1]: session-7.scope: Consumed 8.752s CPU time, 153.5M memory peak, 0B memory swap peak. Jul 14 21:52:59.007897 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. 
Jul 14 21:52:59.008699 systemd-logind[1423]: Removed session 7. Jul 14 21:53:00.074071 kubelet[2460]: I0714 21:53:00.074038 2460 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:53:00.074750 containerd[1439]: time="2025-07-14T21:53:00.074333401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:53:00.075910 kubelet[2460]: I0714 21:53:00.075069 2460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:53:00.894340 kubelet[2460]: I0714 21:53:00.892701 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.89268167 podStartE2EDuration="5.89268167s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.658890749 +0000 UTC m=+1.145136921" watchObservedRunningTime="2025-07-14 21:53:00.89268167 +0000 UTC m=+5.378927842" Jul 14 21:53:00.898695 kubelet[2460]: W0714 21:53:00.898659 2460 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 21:53:00.898776 kubelet[2460]: I0714 21:53:00.898679 2460 status_manager.go:890] "Failed to get status for pod" podUID="dfeea4c2-0d5c-4236-ad0f-c92f73041820" pod="kube-system/kube-proxy-8pqkj" err="pods \"kube-proxy-8pqkj\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 14 21:53:00.898981 kubelet[2460]: W0714 21:53:00.898780 2460 
reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 21:53:00.898981 kubelet[2460]: E0714 21:53:00.898804 2460 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 21:53:00.898981 kubelet[2460]: E0714 21:53:00.898909 2460 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 21:53:00.910766 systemd[1]: Created slice kubepods-besteffort-poddfeea4c2_0d5c_4236_ad0f_c92f73041820.slice - libcontainer container kubepods-besteffort-poddfeea4c2_0d5c_4236_ad0f_c92f73041820.slice. Jul 14 21:53:00.924840 systemd[1]: Created slice kubepods-burstable-pod1f2350fc_31f6_452b_9b51_78f61b63831f.slice - libcontainer container kubepods-burstable-pod1f2350fc_31f6_452b_9b51_78f61b63831f.slice. 
Jul 14 21:53:00.930242 kubelet[2460]: I0714 21:53:00.930204 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-config-path\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930321 kubelet[2460]: I0714 21:53:00.930272 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfeea4c2-0d5c-4236-ad0f-c92f73041820-kube-proxy\") pod \"kube-proxy-8pqkj\" (UID: \"dfeea4c2-0d5c-4236-ad0f-c92f73041820\") " pod="kube-system/kube-proxy-8pqkj" Jul 14 21:53:00.930321 kubelet[2460]: I0714 21:53:00.930294 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-lib-modules\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930321 kubelet[2460]: I0714 21:53:00.930313 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96gw\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-kube-api-access-h96gw\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930407 kubelet[2460]: I0714 21:53:00.930334 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfeea4c2-0d5c-4236-ad0f-c92f73041820-xtables-lock\") pod \"kube-proxy-8pqkj\" (UID: \"dfeea4c2-0d5c-4236-ad0f-c92f73041820\") " pod="kube-system/kube-proxy-8pqkj" Jul 14 21:53:00.930407 kubelet[2460]: I0714 21:53:00.930362 2460 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfeea4c2-0d5c-4236-ad0f-c92f73041820-lib-modules\") pod \"kube-proxy-8pqkj\" (UID: \"dfeea4c2-0d5c-4236-ad0f-c92f73041820\") " pod="kube-system/kube-proxy-8pqkj" Jul 14 21:53:00.930407 kubelet[2460]: I0714 21:53:00.930378 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-kernel\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930407 kubelet[2460]: I0714 21:53:00.930398 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-run\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930509 kubelet[2460]: I0714 21:53:00.930417 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-etc-cni-netd\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930509 kubelet[2460]: I0714 21:53:00.930446 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-hostproc\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930509 kubelet[2460]: I0714 21:53:00.930463 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-cgroup\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930509 kubelet[2460]: I0714 21:53:00.930482 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cni-path\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930509 kubelet[2460]: I0714 21:53:00.930500 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f2350fc-31f6-452b-9b51-78f61b63831f-clustermesh-secrets\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930604 kubelet[2460]: I0714 21:53:00.930519 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-net\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930604 kubelet[2460]: I0714 21:53:00.930537 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-bpf-maps\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf" Jul 14 21:53:00.930604 kubelet[2460]: I0714 21:53:00.930552 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-xtables-lock\") pod \"cilium-bmbnf\" (UID: 
\"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf"
Jul 14 21:53:00.930604 kubelet[2460]: I0714 21:53:00.930574 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-hubble-tls\") pod \"cilium-bmbnf\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") " pod="kube-system/cilium-bmbnf"
Jul 14 21:53:00.930604 kubelet[2460]: I0714 21:53:00.930597 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r96dn\" (UniqueName: \"kubernetes.io/projected/dfeea4c2-0d5c-4236-ad0f-c92f73041820-kube-api-access-r96dn\") pod \"kube-proxy-8pqkj\" (UID: \"dfeea4c2-0d5c-4236-ad0f-c92f73041820\") " pod="kube-system/kube-proxy-8pqkj"
Jul 14 21:53:01.201926 systemd[1]: Created slice kubepods-besteffort-pode7f25c1f_1902_473a_bdc1_cd04cdd099b3.slice - libcontainer container kubepods-besteffort-pode7f25c1f_1902_473a_bdc1_cd04cdd099b3.slice.
Jul 14 21:53:01.232596 kubelet[2460]: I0714 21:53:01.232533 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47j5\" (UniqueName: \"kubernetes.io/projected/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-kube-api-access-w47j5\") pod \"cilium-operator-6c4d7847fc-4wf79\" (UID: \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\") " pod="kube-system/cilium-operator-6c4d7847fc-4wf79"
Jul 14 21:53:01.232596 kubelet[2460]: I0714 21:53:01.232585 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4wf79\" (UID: \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\") " pod="kube-system/cilium-operator-6c4d7847fc-4wf79"
Jul 14 21:53:02.106488 kubelet[2460]: E0714 21:53:02.106441 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.107510 containerd[1439]: time="2025-07-14T21:53:02.107051642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4wf79,Uid:e7f25c1f-1902-473a-bdc1-cd04cdd099b3,Namespace:kube-system,Attempt:0,}"
Jul 14 21:53:02.121996 kubelet[2460]: E0714 21:53:02.121914 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.122705 containerd[1439]: time="2025-07-14T21:53:02.122352508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pqkj,Uid:dfeea4c2-0d5c-4236-ad0f-c92f73041820,Namespace:kube-system,Attempt:0,}"
Jul 14 21:53:02.134671 kubelet[2460]: E0714 21:53:02.134637 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.137987 containerd[1439]: time="2025-07-14T21:53:02.137733774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmbnf,Uid:1f2350fc-31f6-452b-9b51-78f61b63831f,Namespace:kube-system,Attempt:0,}"
Jul 14 21:53:02.139989 containerd[1439]: time="2025-07-14T21:53:02.139892898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:53:02.140085 containerd[1439]: time="2025-07-14T21:53:02.140055578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:53:02.140123 containerd[1439]: time="2025-07-14T21:53:02.140075298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.141904 containerd[1439]: time="2025-07-14T21:53:02.141209220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.157159 containerd[1439]: time="2025-07-14T21:53:02.156853127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:53:02.157159 containerd[1439]: time="2025-07-14T21:53:02.156926527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:53:02.157159 containerd[1439]: time="2025-07-14T21:53:02.156939367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.157159 containerd[1439]: time="2025-07-14T21:53:02.157018007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.161786 systemd[1]: Started cri-containerd-5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86.scope - libcontainer container 5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86.
Jul 14 21:53:02.170245 containerd[1439]: time="2025-07-14T21:53:02.170150630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:53:02.170343 containerd[1439]: time="2025-07-14T21:53:02.170315870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:53:02.170343 containerd[1439]: time="2025-07-14T21:53:02.170333750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.170826 containerd[1439]: time="2025-07-14T21:53:02.170683830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:02.173212 systemd[1]: Started cri-containerd-3783ee95e9436d664eb3baa53ab9adda8239786102d0c325d6eeb412cf92d63a.scope - libcontainer container 3783ee95e9436d664eb3baa53ab9adda8239786102d0c325d6eeb412cf92d63a.
Jul 14 21:53:02.196988 systemd[1]: Started cri-containerd-42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d.scope - libcontainer container 42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d.
Jul 14 21:53:02.202259 containerd[1439]: time="2025-07-14T21:53:02.202205524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pqkj,Uid:dfeea4c2-0d5c-4236-ad0f-c92f73041820,Namespace:kube-system,Attempt:0,} returns sandbox id \"3783ee95e9436d664eb3baa53ab9adda8239786102d0c325d6eeb412cf92d63a\""
Jul 14 21:53:02.203099 kubelet[2460]: E0714 21:53:02.203034 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.208633 containerd[1439]: time="2025-07-14T21:53:02.208496015Z" level=info msg="CreateContainer within sandbox \"3783ee95e9436d664eb3baa53ab9adda8239786102d0c325d6eeb412cf92d63a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 21:53:02.209935 containerd[1439]: time="2025-07-14T21:53:02.209087056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4wf79,Uid:e7f25c1f-1902-473a-bdc1-cd04cdd099b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\""
Jul 14 21:53:02.211132 kubelet[2460]: E0714 21:53:02.211107 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.212697 containerd[1439]: time="2025-07-14T21:53:02.212668302Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 14 21:53:02.223264 containerd[1439]: time="2025-07-14T21:53:02.223231920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmbnf,Uid:1f2350fc-31f6-452b-9b51-78f61b63831f,Namespace:kube-system,Attempt:0,} returns sandbox id \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\""
Jul 14 21:53:02.223915 kubelet[2460]: E0714 21:53:02.223894 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.233502 containerd[1439]: time="2025-07-14T21:53:02.233443018Z" level=info msg="CreateContainer within sandbox \"3783ee95e9436d664eb3baa53ab9adda8239786102d0c325d6eeb412cf92d63a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b37eb5258ef477027297fae335805b63dfce41987ec50f36f8618dd978e8bf7\""
Jul 14 21:53:02.234520 containerd[1439]: time="2025-07-14T21:53:02.234483699Z" level=info msg="StartContainer for \"0b37eb5258ef477027297fae335805b63dfce41987ec50f36f8618dd978e8bf7\""
Jul 14 21:53:02.269795 systemd[1]: Started cri-containerd-0b37eb5258ef477027297fae335805b63dfce41987ec50f36f8618dd978e8bf7.scope - libcontainer container 0b37eb5258ef477027297fae335805b63dfce41987ec50f36f8618dd978e8bf7.
Jul 14 21:53:02.290584 containerd[1439]: time="2025-07-14T21:53:02.290403435Z" level=info msg="StartContainer for \"0b37eb5258ef477027297fae335805b63dfce41987ec50f36f8618dd978e8bf7\" returns successfully"
Jul 14 21:53:02.638753 kubelet[2460]: E0714 21:53:02.638717 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:02.652645 kubelet[2460]: I0714 21:53:02.649586 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pqkj" podStartSLOduration=2.649568688 podStartE2EDuration="2.649568688s" podCreationTimestamp="2025-07-14 21:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:02.649513008 +0000 UTC m=+7.135759180" watchObservedRunningTime="2025-07-14 21:53:02.649568688 +0000 UTC m=+7.135814900"
Jul 14 21:53:03.348944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233275017.mount: Deactivated successfully.
Jul 14 21:53:03.525696 kubelet[2460]: E0714 21:53:03.525663 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:03.643148 kubelet[2460]: E0714 21:53:03.642804 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:03.733218 containerd[1439]: time="2025-07-14T21:53:03.733161219Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:53:03.734406 containerd[1439]: time="2025-07-14T21:53:03.734231021Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 14 21:53:03.735038 containerd[1439]: time="2025-07-14T21:53:03.735006902Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:53:03.736574 containerd[1439]: time="2025-07-14T21:53:03.736536384Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.523831122s"
Jul 14 21:53:03.736648 containerd[1439]: time="2025-07-14T21:53:03.736577785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 14 21:53:03.740785 containerd[1439]: time="2025-07-14T21:53:03.740204830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 14 21:53:03.740943 containerd[1439]: time="2025-07-14T21:53:03.740735591Z" level=info msg="CreateContainer within sandbox \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 14 21:53:03.757726 containerd[1439]: time="2025-07-14T21:53:03.757677098Z" level=info msg="CreateContainer within sandbox \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\""
Jul 14 21:53:03.758472 containerd[1439]: time="2025-07-14T21:53:03.758214659Z" level=info msg="StartContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\""
Jul 14 21:53:03.785786 systemd[1]: Started cri-containerd-b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8.scope - libcontainer container b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8.
Jul 14 21:53:03.806563 containerd[1439]: time="2025-07-14T21:53:03.805978576Z" level=info msg="StartContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" returns successfully"
Jul 14 21:53:04.654809 kubelet[2460]: E0714 21:53:04.654763 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:04.655477 kubelet[2460]: E0714 21:53:04.654791 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:04.674326 kubelet[2460]: I0714 21:53:04.674106 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4wf79" podStartSLOduration=2.146280849 podStartE2EDuration="3.674087097s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="2025-07-14 21:53:02.211785901 +0000 UTC m=+6.698032073" lastFinishedPulling="2025-07-14 21:53:03.739592149 +0000 UTC m=+8.225838321" observedRunningTime="2025-07-14 21:53:04.673993257 +0000 UTC m=+9.160239429" watchObservedRunningTime="2025-07-14 21:53:04.674087097 +0000 UTC m=+9.160333269"
Jul 14 21:53:05.318209 kubelet[2460]: E0714 21:53:05.318168 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:05.651202 kubelet[2460]: E0714 21:53:05.650730 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:05.651202 kubelet[2460]: E0714 21:53:05.650730 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:06.653058 kubelet[2460]: E0714 21:53:06.652661 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:08.281747 kubelet[2460]: E0714 21:53:08.281391 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:09.357045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257044428.mount: Deactivated successfully.
Jul 14 21:53:11.610164 update_engine[1429]: I20250714 21:53:11.610090 1429 update_attempter.cc:509] Updating boot flags...
Jul 14 21:53:11.657743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2912)
Jul 14 21:53:11.710648 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2912)
Jul 14 21:53:11.757654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2912)
Jul 14 21:53:12.700086 containerd[1439]: time="2025-07-14T21:53:12.699702392Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:53:12.700553 containerd[1439]: time="2025-07-14T21:53:12.700514793Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 14 21:53:12.701106 containerd[1439]: time="2025-07-14T21:53:12.701060553Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:53:12.702734 containerd[1439]: time="2025-07-14T21:53:12.702706875Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.961921524s"
Jul 14 21:53:12.702811 containerd[1439]: time="2025-07-14T21:53:12.702738955Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 14 21:53:12.707815 containerd[1439]: time="2025-07-14T21:53:12.707763159Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 21:53:12.733479 containerd[1439]: time="2025-07-14T21:53:12.733383462Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\""
Jul 14 21:53:12.734578 containerd[1439]: time="2025-07-14T21:53:12.733819542Z" level=info msg="StartContainer for \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\""
Jul 14 21:53:12.762789 systemd[1]: Started cri-containerd-537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94.scope - libcontainer container 537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94.
Jul 14 21:53:12.782830 containerd[1439]: time="2025-07-14T21:53:12.782788746Z" level=info msg="StartContainer for \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\" returns successfully"
Jul 14 21:53:12.832599 systemd[1]: cri-containerd-537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94.scope: Deactivated successfully.
Jul 14 21:53:12.969796 containerd[1439]: time="2025-07-14T21:53:12.965016909Z" level=info msg="shim disconnected" id=537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94 namespace=k8s.io
Jul 14 21:53:12.969796 containerd[1439]: time="2025-07-14T21:53:12.969565953Z" level=warning msg="cleaning up after shim disconnected" id=537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94 namespace=k8s.io
Jul 14 21:53:12.969796 containerd[1439]: time="2025-07-14T21:53:12.969578593Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:53:13.680092 kubelet[2460]: E0714 21:53:13.680048 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:13.681987 containerd[1439]: time="2025-07-14T21:53:13.681930473Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:53:13.695783 containerd[1439]: time="2025-07-14T21:53:13.695735045Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\""
Jul 14 21:53:13.697697 containerd[1439]: time="2025-07-14T21:53:13.696371005Z" level=info msg="StartContainer for \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\""
Jul 14 21:53:13.721790 systemd[1]: Started cri-containerd-53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07.scope - libcontainer container 53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07.
Jul 14 21:53:13.732075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94-rootfs.mount: Deactivated successfully.
Jul 14 21:53:13.748996 containerd[1439]: time="2025-07-14T21:53:13.748909449Z" level=info msg="StartContainer for \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\" returns successfully"
Jul 14 21:53:13.781873 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 21:53:13.782085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:53:13.782152 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:53:13.787955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:53:13.788190 systemd[1]: cri-containerd-53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07.scope: Deactivated successfully.
Jul 14 21:53:13.802576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07-rootfs.mount: Deactivated successfully.
Jul 14 21:53:13.803643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:53:13.805641 containerd[1439]: time="2025-07-14T21:53:13.805538057Z" level=info msg="shim disconnected" id=53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07 namespace=k8s.io
Jul 14 21:53:13.805641 containerd[1439]: time="2025-07-14T21:53:13.805598737Z" level=warning msg="cleaning up after shim disconnected" id=53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07 namespace=k8s.io
Jul 14 21:53:13.805641 containerd[1439]: time="2025-07-14T21:53:13.805607377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:53:13.815185 containerd[1439]: time="2025-07-14T21:53:13.815126985Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:53:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 14 21:53:14.683454 kubelet[2460]: E0714 21:53:14.683365 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:14.686617 containerd[1439]: time="2025-07-14T21:53:14.686571840Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:53:14.707215 containerd[1439]: time="2025-07-14T21:53:14.707155976Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\""
Jul 14 21:53:14.707820 containerd[1439]: time="2025-07-14T21:53:14.707793137Z" level=info msg="StartContainer for \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\""
Jul 14 21:53:14.733794 systemd[1]: Started cri-containerd-75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c.scope - libcontainer container 75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c.
Jul 14 21:53:14.758730 containerd[1439]: time="2025-07-14T21:53:14.758687737Z" level=info msg="StartContainer for \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\" returns successfully"
Jul 14 21:53:14.775400 systemd[1]: cri-containerd-75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c.scope: Deactivated successfully.
Jul 14 21:53:14.793207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c-rootfs.mount: Deactivated successfully.
Jul 14 21:53:14.807153 containerd[1439]: time="2025-07-14T21:53:14.807097935Z" level=info msg="shim disconnected" id=75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c namespace=k8s.io
Jul 14 21:53:14.807153 containerd[1439]: time="2025-07-14T21:53:14.807151055Z" level=warning msg="cleaning up after shim disconnected" id=75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c namespace=k8s.io
Jul 14 21:53:14.807153 containerd[1439]: time="2025-07-14T21:53:14.807160895Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:53:15.687335 kubelet[2460]: E0714 21:53:15.686995 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:15.689785 containerd[1439]: time="2025-07-14T21:53:15.689683435Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:53:15.706566 containerd[1439]: time="2025-07-14T21:53:15.706523888Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\""
Jul 14 21:53:15.707106 containerd[1439]: time="2025-07-14T21:53:15.707031888Z" level=info msg="StartContainer for \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\""
Jul 14 21:53:15.740814 systemd[1]: Started cri-containerd-d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5.scope - libcontainer container d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5.
Jul 14 21:53:15.760864 systemd[1]: cri-containerd-d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5.scope: Deactivated successfully.
Jul 14 21:53:15.762667 containerd[1439]: time="2025-07-14T21:53:15.762513649Z" level=info msg="StartContainer for \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\" returns successfully"
Jul 14 21:53:15.778384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5-rootfs.mount: Deactivated successfully.
Jul 14 21:53:15.783458 containerd[1439]: time="2025-07-14T21:53:15.783387704Z" level=info msg="shim disconnected" id=d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5 namespace=k8s.io
Jul 14 21:53:15.783458 containerd[1439]: time="2025-07-14T21:53:15.783455584Z" level=warning msg="cleaning up after shim disconnected" id=d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5 namespace=k8s.io
Jul 14 21:53:15.783671 containerd[1439]: time="2025-07-14T21:53:15.783464984Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:53:16.690797 kubelet[2460]: E0714 21:53:16.690767 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:16.693871 containerd[1439]: time="2025-07-14T21:53:16.693830824Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:53:16.709587 containerd[1439]: time="2025-07-14T21:53:16.709450555Z" level=info msg="CreateContainer within sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\""
Jul 14 21:53:16.711391 containerd[1439]: time="2025-07-14T21:53:16.710527835Z" level=info msg="StartContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\""
Jul 14 21:53:16.733779 systemd[1]: Started cri-containerd-56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a.scope - libcontainer container 56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a.
Jul 14 21:53:16.762116 containerd[1439]: time="2025-07-14T21:53:16.762008631Z" level=info msg="StartContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" returns successfully"
Jul 14 21:53:16.913928 kubelet[2460]: I0714 21:53:16.913892 2460 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 14 21:53:16.956894 systemd[1]: Created slice kubepods-burstable-pod1f1f80ec_223c_4040_90c9_5a4cc71fb41c.slice - libcontainer container kubepods-burstable-pod1f1f80ec_223c_4040_90c9_5a4cc71fb41c.slice.
Jul 14 21:53:16.967351 systemd[1]: Created slice kubepods-burstable-podacf19fc2_9fbf_478f_8fcb_b1aae2e0f341.slice - libcontainer container kubepods-burstable-podacf19fc2_9fbf_478f_8fcb_b1aae2e0f341.slice.
Jul 14 21:53:17.053165 kubelet[2460]: I0714 21:53:17.053079 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acf19fc2-9fbf-478f-8fcb-b1aae2e0f341-config-volume\") pod \"coredns-668d6bf9bc-rhr5p\" (UID: \"acf19fc2-9fbf-478f-8fcb-b1aae2e0f341\") " pod="kube-system/coredns-668d6bf9bc-rhr5p"
Jul 14 21:53:17.053165 kubelet[2460]: I0714 21:53:17.053120 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1f80ec-223c-4040-90c9-5a4cc71fb41c-config-volume\") pod \"coredns-668d6bf9bc-86xj9\" (UID: \"1f1f80ec-223c-4040-90c9-5a4cc71fb41c\") " pod="kube-system/coredns-668d6bf9bc-86xj9"
Jul 14 21:53:17.053165 kubelet[2460]: I0714 21:53:17.053144 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kpb\" (UniqueName: \"kubernetes.io/projected/1f1f80ec-223c-4040-90c9-5a4cc71fb41c-kube-api-access-f5kpb\") pod \"coredns-668d6bf9bc-86xj9\" (UID: \"1f1f80ec-223c-4040-90c9-5a4cc71fb41c\") " pod="kube-system/coredns-668d6bf9bc-86xj9"
Jul 14 21:53:17.053427 kubelet[2460]: I0714 21:53:17.053391 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpxk\" (UniqueName: \"kubernetes.io/projected/acf19fc2-9fbf-478f-8fcb-b1aae2e0f341-kube-api-access-wbpxk\") pod \"coredns-668d6bf9bc-rhr5p\" (UID: \"acf19fc2-9fbf-478f-8fcb-b1aae2e0f341\") " pod="kube-system/coredns-668d6bf9bc-rhr5p"
Jul 14 21:53:17.263855 kubelet[2460]: E0714 21:53:17.263817 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:17.265923 containerd[1439]: time="2025-07-14T21:53:17.265731288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86xj9,Uid:1f1f80ec-223c-4040-90c9-5a4cc71fb41c,Namespace:kube-system,Attempt:0,}"
Jul 14 21:53:17.271607 kubelet[2460]: E0714 21:53:17.271376 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:17.272983 containerd[1439]: time="2025-07-14T21:53:17.271970292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhr5p,Uid:acf19fc2-9fbf-478f-8fcb-b1aae2e0f341,Namespace:kube-system,Attempt:0,}"
Jul 14 21:53:17.696350 kubelet[2460]: E0714 21:53:17.696255 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:17.716055 kubelet[2460]: I0714 21:53:17.715986 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bmbnf" podStartSLOduration=7.237625469 podStartE2EDuration="17.71594902s" podCreationTimestamp="2025-07-14 21:53:00 +0000 UTC" firstStartedPulling="2025-07-14 21:53:02.225209764 +0000 UTC m=+6.711455896" lastFinishedPulling="2025-07-14 21:53:12.703533275 +0000 UTC m=+17.189779447" observedRunningTime="2025-07-14 21:53:17.713115578 +0000 UTC m=+22.199361750" watchObservedRunningTime="2025-07-14 21:53:17.71594902 +0000 UTC m=+22.202195192"
Jul 14 21:53:18.698156 kubelet[2460]: E0714 21:53:18.698052 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:18.941214 systemd-networkd[1376]: cilium_host: Link UP
Jul 14 21:53:18.941340 systemd-networkd[1376]: cilium_net: Link UP
Jul 14 21:53:18.941343 systemd-networkd[1376]: cilium_net: Gained carrier
Jul 14 21:53:18.941603 systemd-networkd[1376]: cilium_host: Gained carrier
Jul 14 21:53:18.942561 systemd-networkd[1376]: cilium_net: Gained IPv6LL
Jul 14 21:53:18.942767 systemd-networkd[1376]: cilium_host: Gained IPv6LL
Jul 14 21:53:19.030695 systemd-networkd[1376]: cilium_vxlan: Link UP
Jul 14 21:53:19.030701 systemd-networkd[1376]: cilium_vxlan: Gained carrier
Jul 14 21:53:19.315650 kernel: NET: Registered PF_ALG protocol family
Jul 14 21:53:19.699895 kubelet[2460]: E0714 21:53:19.699856 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:19.898159 systemd-networkd[1376]: lxc_health: Link UP
Jul 14 21:53:19.907922 systemd-networkd[1376]: lxc_health: Gained carrier
Jul 14 21:53:20.381285 systemd-networkd[1376]: lxc334e6eae6c74: Link UP
Jul 14 21:53:20.386252 systemd-networkd[1376]: lxc74b937e0093e: Link UP
Jul 14 21:53:20.387649 kernel: eth0: renamed from tmpfe6d5
Jul 14 21:53:20.408044 systemd-networkd[1376]: lxc334e6eae6c74: Gained carrier
Jul 14 21:53:20.412209 kernel: eth0: renamed from tmp9e853
Jul 14 21:53:20.422386 systemd-networkd[1376]: lxc74b937e0093e: Gained carrier
Jul 14 21:53:20.703389 kubelet[2460]: E0714 21:53:20.703284 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:21.039772 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL
Jul 14 21:53:21.615751 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jul 14 21:53:21.679777 systemd-networkd[1376]: lxc334e6eae6c74: Gained IPv6LL
Jul 14 21:53:21.705423 kubelet[2460]: E0714 21:53:21.705373 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:21.807793 systemd-networkd[1376]: lxc74b937e0093e: Gained IPv6LL
Jul 14 21:53:22.707346 kubelet[2460]: E0714 21:53:22.707221 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:23.969441 containerd[1439]: time="2025-07-14T21:53:23.969280792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:53:23.969441 containerd[1439]: time="2025-07-14T21:53:23.969338512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:53:23.969441 containerd[1439]: time="2025-07-14T21:53:23.969366072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:23.973550 containerd[1439]: time="2025-07-14T21:53:23.971042713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:23.989321 containerd[1439]: time="2025-07-14T21:53:23.989219561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:53:23.989469 containerd[1439]: time="2025-07-14T21:53:23.989309481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:53:23.989469 containerd[1439]: time="2025-07-14T21:53:23.989330241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:23.989469 containerd[1439]: time="2025-07-14T21:53:23.989428241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:53:23.990798 systemd[1]: run-containerd-runc-k8s.io-9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948-runc.9qb3x3.mount: Deactivated successfully.
Jul 14 21:53:23.999207 systemd[1]: Started cri-containerd-9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948.scope - libcontainer container 9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948.
Jul 14 21:53:24.021828 systemd[1]: Started cri-containerd-fe6d5a5b6eb8094725bd3fc9a3c83dc37afa0b7b584dbf090f1b35f3890fc324.scope - libcontainer container fe6d5a5b6eb8094725bd3fc9a3c83dc37afa0b7b584dbf090f1b35f3890fc324.
Jul 14 21:53:24.029143 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:53:24.035233 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 21:53:24.049068 containerd[1439]: time="2025-07-14T21:53:24.049020026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86xj9,Uid:1f1f80ec-223c-4040-90c9-5a4cc71fb41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948\""
Jul 14 21:53:24.049804 kubelet[2460]: E0714 21:53:24.049781 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:24.052678 containerd[1439]: time="2025-07-14T21:53:24.052548148Z" level=info msg="CreateContainer within sandbox \"9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 21:53:24.058876 containerd[1439]: time="2025-07-14T21:53:24.058774270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rhr5p,Uid:acf19fc2-9fbf-478f-8fcb-b1aae2e0f341,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe6d5a5b6eb8094725bd3fc9a3c83dc37afa0b7b584dbf090f1b35f3890fc324\""
Jul 14 21:53:24.059538 kubelet[2460]: E0714 21:53:24.059506 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:24.062064 containerd[1439]: time="2025-07-14T21:53:24.061913712Z" level=info msg="CreateContainer within sandbox \"fe6d5a5b6eb8094725bd3fc9a3c83dc37afa0b7b584dbf090f1b35f3890fc324\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 21:53:24.068559 containerd[1439]: time="2025-07-14T21:53:24.068512874Z" level=info msg="CreateContainer within sandbox \"9e85306ca611c9ccca3cd15f23474023cfd68f8d4ffdbd6760cff06c6cb68948\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93641cb0cbc0e0badd17a12f30f906b0d07c4e426f59600a7c5038255d883df3\""
Jul 14 21:53:24.068949 containerd[1439]: time="2025-07-14T21:53:24.068923754Z" level=info msg="StartContainer for \"93641cb0cbc0e0badd17a12f30f906b0d07c4e426f59600a7c5038255d883df3\""
Jul 14 21:53:24.094810 systemd[1]: Started cri-containerd-93641cb0cbc0e0badd17a12f30f906b0d07c4e426f59600a7c5038255d883df3.scope - libcontainer container 93641cb0cbc0e0badd17a12f30f906b0d07c4e426f59600a7c5038255d883df3.
Jul 14 21:53:24.112050 containerd[1439]: time="2025-07-14T21:53:24.111936892Z" level=info msg="CreateContainer within sandbox \"fe6d5a5b6eb8094725bd3fc9a3c83dc37afa0b7b584dbf090f1b35f3890fc324\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0740cc393cbc63cc3458d6f514dc11238ce6bdf1b768e48ec427a8dd53321179\""
Jul 14 21:53:24.113323 containerd[1439]: time="2025-07-14T21:53:24.112924533Z" level=info msg="StartContainer for \"0740cc393cbc63cc3458d6f514dc11238ce6bdf1b768e48ec427a8dd53321179\""
Jul 14 21:53:24.122520 containerd[1439]: time="2025-07-14T21:53:24.122482217Z" level=info msg="StartContainer for \"93641cb0cbc0e0badd17a12f30f906b0d07c4e426f59600a7c5038255d883df3\" returns successfully"
Jul 14 21:53:24.141878 systemd[1]: Started cri-containerd-0740cc393cbc63cc3458d6f514dc11238ce6bdf1b768e48ec427a8dd53321179.scope - libcontainer container 0740cc393cbc63cc3458d6f514dc11238ce6bdf1b768e48ec427a8dd53321179.
Jul 14 21:53:24.175680 containerd[1439]: time="2025-07-14T21:53:24.175620718Z" level=info msg="StartContainer for \"0740cc393cbc63cc3458d6f514dc11238ce6bdf1b768e48ec427a8dd53321179\" returns successfully"
Jul 14 21:53:24.712403 kubelet[2460]: E0714 21:53:24.712365 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:24.715110 kubelet[2460]: E0714 21:53:24.714940 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:24.723084 kubelet[2460]: I0714 21:53:24.722777 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86xj9" podStartSLOduration=23.722763464 podStartE2EDuration="23.722763464s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:24.722036984 +0000 UTC m=+29.208283156" watchObservedRunningTime="2025-07-14 21:53:24.722763464 +0000 UTC m=+29.209009636"
Jul 14 21:53:24.746167 kubelet[2460]: I0714 21:53:24.746092 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rhr5p" podStartSLOduration=23.746071994 podStartE2EDuration="23.746071994s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:24.745049513 +0000 UTC m=+29.231295685" watchObservedRunningTime="2025-07-14 21:53:24.746071994 +0000 UTC m=+29.232318166"
Jul 14 21:53:25.716636 kubelet[2460]: E0714 21:53:25.716488 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:25.718068 kubelet[2460]: E0714 21:53:25.718041 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:26.718269 kubelet[2460]: E0714 21:53:26.717931 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:26.718269 kubelet[2460]: E0714 21:53:26.717983 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:53:35.289292 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:50180.service - OpenSSH per-connection server daemon (10.0.0.1:50180).
Jul 14 21:53:35.327111 sshd[3878]: Accepted publickey for core from 10.0.0.1 port 50180 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:53:35.328683 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:53:35.332302 systemd-logind[1423]: New session 8 of user core.
Jul 14 21:53:35.343780 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 21:53:35.475987 sshd[3878]: pam_unix(sshd:session): session closed for user core
Jul 14 21:53:35.480266 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:50180.service: Deactivated successfully.
Jul 14 21:53:35.482227 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 21:53:35.483492 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Jul 14 21:53:35.484681 systemd-logind[1423]: Removed session 8.
Jul 14 21:53:40.487606 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:50190.service - OpenSSH per-connection server daemon (10.0.0.1:50190).
Jul 14 21:53:40.522595 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:53:40.523982 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:53:40.527473 systemd-logind[1423]: New session 9 of user core.
Jul 14 21:53:40.543808 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 21:53:40.663035 sshd[3893]: pam_unix(sshd:session): session closed for user core
Jul 14 21:53:40.667335 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:50190.service: Deactivated successfully.
Jul 14 21:53:40.671753 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 21:53:40.675391 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Jul 14 21:53:40.676374 systemd-logind[1423]: Removed session 9.
Jul 14 21:53:45.674446 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:39492.service - OpenSSH per-connection server daemon (10.0.0.1:39492).
Jul 14 21:53:45.712453 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 39492 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:53:45.713976 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:53:45.718791 systemd-logind[1423]: New session 10 of user core.
Jul 14 21:53:45.727797 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 21:53:45.845073 sshd[3908]: pam_unix(sshd:session): session closed for user core
Jul 14 21:53:45.849102 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:39492.service: Deactivated successfully.
Jul 14 21:53:45.851001 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 21:53:45.853899 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Jul 14 21:53:45.854743 systemd-logind[1423]: Removed session 10.
Jul 14 21:53:50.854395 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504).
Jul 14 21:53:50.912031 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:53:50.913497 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:53:50.917111 systemd-logind[1423]: New session 11 of user core.
Jul 14 21:53:50.928836 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 21:53:51.049222 sshd[3924]: pam_unix(sshd:session): session closed for user core
Jul 14 21:53:51.052780 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:39504.service: Deactivated successfully.
Jul 14 21:53:51.054450 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 21:53:51.056797 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Jul 14 21:53:51.057651 systemd-logind[1423]: Removed session 11.
Jul 14 21:53:56.061662 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724).
Jul 14 21:53:56.112525 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:53:56.113936 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:53:56.122511 systemd-logind[1423]: New session 12 of user core.
Jul 14 21:53:56.131848 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 21:53:56.255402 sshd[3941]: pam_unix(sshd:session): session closed for user core
Jul 14 21:53:56.258042 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:54724.service: Deactivated successfully.
Jul 14 21:53:56.259815 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 21:53:56.265751 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Jul 14 21:53:56.266646 systemd-logind[1423]: Removed session 12.
Jul 14 21:54:01.265418 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730).
Jul 14 21:54:01.302853 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:01.304321 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:01.310479 systemd-logind[1423]: New session 13 of user core.
Jul 14 21:54:01.324923 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 21:54:01.447954 sshd[3956]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:01.461037 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:54730.service: Deactivated successfully.
Jul 14 21:54:01.462867 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 21:54:01.472832 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Jul 14 21:54:01.473090 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738).
Jul 14 21:54:01.474302 systemd-logind[1423]: Removed session 13.
Jul 14 21:54:01.505879 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:01.507150 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:01.510636 systemd-logind[1423]: New session 14 of user core.
Jul 14 21:54:01.523814 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 21:54:01.683132 sshd[3972]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:01.696291 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:54738.service: Deactivated successfully.
Jul 14 21:54:01.699401 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 21:54:01.702449 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Jul 14 21:54:01.713968 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:54744.service - OpenSSH per-connection server daemon (10.0.0.1:54744).
Jul 14 21:54:01.715023 systemd-logind[1423]: Removed session 14.
Jul 14 21:54:01.743916 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 54744 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:01.745481 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:01.750198 systemd-logind[1423]: New session 15 of user core.
Jul 14 21:54:01.759787 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 21:54:01.871850 sshd[3984]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:01.875172 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:54744.service: Deactivated successfully.
Jul 14 21:54:01.877261 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 21:54:01.877949 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Jul 14 21:54:01.879036 systemd-logind[1423]: Removed session 15.
Jul 14 21:54:06.887559 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818).
Jul 14 21:54:06.917191 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:06.918379 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:06.924514 systemd-logind[1423]: New session 16 of user core.
Jul 14 21:54:06.933359 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 21:54:07.045970 sshd[4002]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:07.048721 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:36818.service: Deactivated successfully.
Jul 14 21:54:07.050351 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 21:54:07.054441 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Jul 14 21:54:07.056050 systemd-logind[1423]: Removed session 16.
Jul 14 21:54:08.610857 kubelet[2460]: E0714 21:54:08.610825 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:11.610633 kubelet[2460]: E0714 21:54:11.610184 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:12.060139 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:36822.service - OpenSSH per-connection server daemon (10.0.0.1:36822).
Jul 14 21:54:12.096204 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 36822 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:12.097453 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:12.101854 systemd-logind[1423]: New session 17 of user core.
Jul 14 21:54:12.107777 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 21:54:12.217760 sshd[4016]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:12.229195 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:36822.service: Deactivated successfully.
Jul 14 21:54:12.230925 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 21:54:12.232865 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Jul 14 21:54:12.234762 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:36832.service - OpenSSH per-connection server daemon (10.0.0.1:36832).
Jul 14 21:54:12.237757 systemd-logind[1423]: Removed session 17.
Jul 14 21:54:12.270994 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 36832 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:12.272234 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:12.276482 systemd-logind[1423]: New session 18 of user core.
Jul 14 21:54:12.289778 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 21:54:22.519757 sshd[4030]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:22.530451 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:36832.service: Deactivated successfully.
Jul 14 21:54:22.533360 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 21:54:22.534913 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Jul 14 21:54:22.550367 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:34018.service - OpenSSH per-connection server daemon (10.0.0.1:34018).
Jul 14 21:54:22.552009 systemd-logind[1423]: Removed session 18.
Jul 14 21:54:22.585390 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 34018 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:22.586982 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:22.590799 systemd-logind[1423]: New session 19 of user core.
Jul 14 21:54:22.602765 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 21:54:25.612192 kubelet[2460]: E0714 21:54:25.612097 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:25.612192 kubelet[2460]: E0714 21:54:25.612187 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:26.610935 kubelet[2460]: E0714 21:54:26.610476 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:30.610132 kubelet[2460]: E0714 21:54:30.610073 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:31.610304 kubelet[2460]: E0714 21:54:31.610136 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:40.609974 kubelet[2460]: E0714 21:54:40.609941 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:43.248061 sshd[4043]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:43.257716 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:34018.service: Deactivated successfully.
Jul 14 21:54:43.262184 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 21:54:43.264226 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit.
Jul 14 21:54:43.273946 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:47074.service - OpenSSH per-connection server daemon (10.0.0.1:47074).
Jul 14 21:54:43.275854 systemd-logind[1423]: Removed session 19.
Jul 14 21:54:43.308168 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 47074 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:43.309832 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:43.313766 systemd-logind[1423]: New session 20 of user core.
Jul 14 21:54:43.322796 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 21:54:43.549003 sshd[4065]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:43.557358 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:47074.service: Deactivated successfully.
Jul 14 21:54:43.559672 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 21:54:43.564757 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Jul 14 21:54:43.574895 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:47084.service - OpenSSH per-connection server daemon (10.0.0.1:47084).
Jul 14 21:54:43.576197 systemd-logind[1423]: Removed session 20.
Jul 14 21:54:43.604937 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 47084 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:43.606446 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:43.613752 systemd-logind[1423]: New session 21 of user core.
Jul 14 21:54:43.621824 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 21:54:43.746132 sshd[4077]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:43.749771 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:47084.service: Deactivated successfully.
Jul 14 21:54:43.751981 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 21:54:43.752993 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit.
Jul 14 21:54:43.753955 systemd-logind[1423]: Removed session 21.
Jul 14 21:54:48.757456 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:47086.service - OpenSSH per-connection server daemon (10.0.0.1:47086).
Jul 14 21:54:48.790699 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 47086 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:48.792128 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:48.796368 systemd-logind[1423]: New session 22 of user core.
Jul 14 21:54:48.808789 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 21:54:48.917728 sshd[4094]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:48.921274 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:47086.service: Deactivated successfully.
Jul 14 21:54:48.922957 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 21:54:48.923537 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit.
Jul 14 21:54:48.924424 systemd-logind[1423]: Removed session 22.
Jul 14 21:54:53.928525 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:40436.service - OpenSSH per-connection server daemon (10.0.0.1:40436).
Jul 14 21:54:53.973981 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 40436 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:53.975372 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:53.981649 systemd-logind[1423]: New session 23 of user core.
Jul 14 21:54:53.990830 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 21:54:54.112847 sshd[4109]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:54.115540 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:40436.service: Deactivated successfully.
Jul 14 21:54:54.117603 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 21:54:54.119544 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit.
Jul 14 21:54:54.120484 systemd-logind[1423]: Removed session 23.
Jul 14 21:54:59.123236 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:40440.service - OpenSSH per-connection server daemon (10.0.0.1:40440).
Jul 14 21:54:59.154927 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 40440 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:59.156049 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:59.160256 systemd-logind[1423]: New session 24 of user core.
Jul 14 21:54:59.174767 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 21:54:59.287235 sshd[4125]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:59.296991 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:40440.service: Deactivated successfully.
Jul 14 21:54:59.299189 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 21:54:59.300512 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit.
Jul 14 21:54:59.312098 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:40454.service - OpenSSH per-connection server daemon (10.0.0.1:40454).
Jul 14 21:54:59.314963 systemd-logind[1423]: Removed session 24.
Jul 14 21:54:59.342116 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 40454 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:59.343348 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:59.347164 systemd-logind[1423]: New session 25 of user core.
Jul 14 21:54:59.358767 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 21:55:01.129932 containerd[1439]: time="2025-07-14T21:55:01.129863802Z" level=info msg="StopContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" with timeout 30 (s)"
Jul 14 21:55:01.130344 containerd[1439]: time="2025-07-14T21:55:01.130240242Z" level=info msg="Stop container \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" with signal terminated"
Jul 14 21:55:01.143097 systemd[1]: cri-containerd-b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8.scope: Deactivated successfully.
Jul 14 21:55:01.166741 containerd[1439]: time="2025-07-14T21:55:01.166691070Z" level=info msg="StopContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" with timeout 2 (s)"
Jul 14 21:55:01.167152 containerd[1439]: time="2025-07-14T21:55:01.167121351Z" level=info msg="Stop container \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" with signal terminated"
Jul 14 21:55:01.170864 containerd[1439]: time="2025-07-14T21:55:01.170798278Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 21:55:01.171785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8-rootfs.mount: Deactivated successfully.
Jul 14 21:55:01.173896 systemd-networkd[1376]: lxc_health: Link DOWN
Jul 14 21:55:01.173907 systemd-networkd[1376]: lxc_health: Lost carrier
Jul 14 21:55:01.176719 containerd[1439]: time="2025-07-14T21:55:01.176659969Z" level=info msg="shim disconnected" id=b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8 namespace=k8s.io
Jul 14 21:55:01.176719 containerd[1439]: time="2025-07-14T21:55:01.176708129Z" level=warning msg="cleaning up after shim disconnected" id=b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8 namespace=k8s.io
Jul 14 21:55:01.176719 containerd[1439]: time="2025-07-14T21:55:01.176716809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:01.203122 systemd[1]: cri-containerd-56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a.scope: Deactivated successfully.
Jul 14 21:55:01.203418 systemd[1]: cri-containerd-56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a.scope: Consumed 6.622s CPU time.
Jul 14 21:55:01.225004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a-rootfs.mount: Deactivated successfully.
Jul 14 21:55:01.232233 containerd[1439]: time="2025-07-14T21:55:01.232184953Z" level=info msg="StopContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" returns successfully"
Jul 14 21:55:01.232924 containerd[1439]: time="2025-07-14T21:55:01.232887594Z" level=info msg="StopPodSandbox for \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\""
Jul 14 21:55:01.232980 containerd[1439]: time="2025-07-14T21:55:01.232932514Z" level=info msg="Container to stop \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.233198 containerd[1439]: time="2025-07-14T21:55:01.233147195Z" level=info msg="shim disconnected" id=56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a namespace=k8s.io
Jul 14 21:55:01.233238 containerd[1439]: time="2025-07-14T21:55:01.233196755Z" level=warning msg="cleaning up after shim disconnected" id=56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a namespace=k8s.io
Jul 14 21:55:01.233238 containerd[1439]: time="2025-07-14T21:55:01.233207595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:01.234520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86-shm.mount: Deactivated successfully.
Jul 14 21:55:01.240799 systemd[1]: cri-containerd-5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86.scope: Deactivated successfully.
Jul 14 21:55:01.249639 containerd[1439]: time="2025-07-14T21:55:01.249564105Z" level=info msg="StopContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" returns successfully"
Jul 14 21:55:01.250239 containerd[1439]: time="2025-07-14T21:55:01.250217587Z" level=info msg="StopPodSandbox for \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\""
Jul 14 21:55:01.250300 containerd[1439]: time="2025-07-14T21:55:01.250249667Z" level=info msg="Container to stop \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.250401 containerd[1439]: time="2025-07-14T21:55:01.250262627Z" level=info msg="Container to stop \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.250401 containerd[1439]: time="2025-07-14T21:55:01.250333667Z" level=info msg="Container to stop \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.250401 containerd[1439]: time="2025-07-14T21:55:01.250346347Z" level=info msg="Container to stop \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.250401 containerd[1439]: time="2025-07-14T21:55:01.250355787Z" level=info msg="Container to stop \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 21:55:01.252603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d-shm.mount: Deactivated successfully.
Jul 14 21:55:01.257094 systemd[1]: cri-containerd-42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d.scope: Deactivated successfully.
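The containerd entries in this log all share one structured shape: a syslog prefix, then a logrus-style `time=… level=… msg="…"` payload with an optional `id=` field carrying a 64-character hex container or sandbox ID. As an illustration only (not part of this log's tooling), a minimal stdlib-Python sketch that pulls those fields out of such a line; the sample line is quoted from this log:

```python
import re

# containerd journal entries carry a logrus key=value payload after the syslog
# prefix; quotes inside msg="..." are backslash-escaped in raw journal output,
# and container/sandbox ids are 64-character lowercase hex strings.
ENTRY_RE = re.compile(
    r'level=(?P<level>\w+)\s+'
    r'msg="(?P<msg>(?:[^"\\]|\\.)*)"'    # allow \" inside the message body
    r'(?:\s+id=(?P<id>[0-9a-f]{64}))?'   # id= appears only on shim events
)

def parse_containerd_entry(line):
    """Return (level, msg, id_or_None) for a containerd log line, else None."""
    m = ENTRY_RE.search(line)
    return None if m is None else (m.group("level"), m.group("msg"), m.group("id"))

sample = ('Jul 14 21:55:01.176719 containerd[1439]: '
          'time="2025-07-14T21:55:01.176659969Z" level=info '
          'msg="shim disconnected" '
          'id=b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8 '
          'namespace=k8s.io')
level, msg, cid = parse_containerd_entry(sample)
```

Grouping the resulting tuples by `id` reconstructs each container's teardown sequence (shim disconnected, cleanup, StopContainer/StopPodSandbox results) from the interleaved stream.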
Jul 14 21:55:01.276196 containerd[1439]: time="2025-07-14T21:55:01.276129475Z" level=info msg="shim disconnected" id=5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86 namespace=k8s.io
Jul 14 21:55:01.276462 containerd[1439]: time="2025-07-14T21:55:01.276441916Z" level=warning msg="cleaning up after shim disconnected" id=5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86 namespace=k8s.io
Jul 14 21:55:01.276644 containerd[1439]: time="2025-07-14T21:55:01.276622356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:01.282592 containerd[1439]: time="2025-07-14T21:55:01.281871126Z" level=info msg="shim disconnected" id=42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d namespace=k8s.io
Jul 14 21:55:01.283435 containerd[1439]: time="2025-07-14T21:55:01.283398409Z" level=warning msg="cleaning up after shim disconnected" id=42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d namespace=k8s.io
Jul 14 21:55:01.283435 containerd[1439]: time="2025-07-14T21:55:01.283430569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:01.289377 containerd[1439]: time="2025-07-14T21:55:01.289334580Z" level=info msg="TearDown network for sandbox \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\" successfully"
Jul 14 21:55:01.289501 containerd[1439]: time="2025-07-14T21:55:01.289486460Z" level=info msg="StopPodSandbox for \"5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86\" returns successfully"
Jul 14 21:55:01.296678 containerd[1439]: time="2025-07-14T21:55:01.296603233Z" level=info msg="TearDown network for sandbox \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" successfully"
Jul 14 21:55:01.296678 containerd[1439]: time="2025-07-14T21:55:01.296671074Z" level=info msg="StopPodSandbox for \"42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d\" returns successfully"
Jul 14 21:55:01.400144 kubelet[2460]: I0714 21:55:01.400030 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cni-path\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400144 kubelet[2460]: I0714 21:55:01.400076 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-xtables-lock\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400144 kubelet[2460]: I0714 21:55:01.400102 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w47j5\" (UniqueName: \"kubernetes.io/projected/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-kube-api-access-w47j5\") pod \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\" (UID: \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\") "
Jul 14 21:55:01.400144 kubelet[2460]: I0714 21:55:01.400123 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-hubble-tls\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400144 kubelet[2460]: I0714 21:55:01.400140 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-cilium-config-path\") pod \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\" (UID: \"e7f25c1f-1902-473a-bdc1-cd04cdd099b3\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400155 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-run\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400169 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-etc-cni-netd\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400207 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-net\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400225 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h96gw\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-kube-api-access-h96gw\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400241 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-config-path\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400674 kubelet[2460]: I0714 21:55:01.400257 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-hostproc\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400807 kubelet[2460]: I0714 21:55:01.400280 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-cgroup\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400807 kubelet[2460]: I0714 21:55:01.400294 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-bpf-maps\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400807 kubelet[2460]: I0714 21:55:01.400309 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-lib-modules\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400807 kubelet[2460]: I0714 21:55:01.400335 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-kernel\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.400807 kubelet[2460]: I0714 21:55:01.400354 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f2350fc-31f6-452b-9b51-78f61b63831f-clustermesh-secrets\") pod \"1f2350fc-31f6-452b-9b51-78f61b63831f\" (UID: \"1f2350fc-31f6-452b-9b51-78f61b63831f\") "
Jul 14 21:55:01.402945 kubelet[2460]: I0714 21:55:01.402904 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403005 kubelet[2460]: I0714 21:55:01.402967 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cni-path" (OuterVolumeSpecName: "cni-path") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403005 kubelet[2460]: I0714 21:55:01.402983 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403314 kubelet[2460]: I0714 21:55:01.403107 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403314 kubelet[2460]: I0714 21:55:01.403193 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403314 kubelet[2460]: I0714 21:55:01.403220 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.403314 kubelet[2460]: I0714 21:55:01.403235 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.408594 kubelet[2460]: I0714 21:55:01.408561 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7f25c1f-1902-473a-bdc1-cd04cdd099b3" (UID: "e7f25c1f-1902-473a-bdc1-cd04cdd099b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 21:55:01.415228 kubelet[2460]: I0714 21:55:01.415174 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2350fc-31f6-452b-9b51-78f61b63831f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 14 21:55:01.415308 kubelet[2460]: I0714 21:55:01.415244 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.416569 kubelet[2460]: I0714 21:55:01.416460 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-kube-api-access-h96gw" (OuterVolumeSpecName: "kube-api-access-h96gw") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "kube-api-access-h96gw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 21:55:01.416569 kubelet[2460]: I0714 21:55:01.416514 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-hostproc" (OuterVolumeSpecName: "hostproc") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.416569 kubelet[2460]: I0714 21:55:01.416534 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 21:55:01.416835 kubelet[2460]: I0714 21:55:01.416805 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 21:55:01.416835 kubelet[2460]: I0714 21:55:01.416805 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-kube-api-access-w47j5" (OuterVolumeSpecName: "kube-api-access-w47j5") pod "e7f25c1f-1902-473a-bdc1-cd04cdd099b3" (UID: "e7f25c1f-1902-473a-bdc1-cd04cdd099b3"). InnerVolumeSpecName "kube-api-access-w47j5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 21:55:01.417137 kubelet[2460]: I0714 21:55:01.417092 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f2350fc-31f6-452b-9b51-78f61b63831f" (UID: "1f2350fc-31f6-452b-9b51-78f61b63831f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501225 2460 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h96gw\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-kube-api-access-h96gw\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501263 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501273 2460 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501281 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501290 2460 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501297 2460 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501306 2460 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501410 kubelet[2460]: I0714 21:55:01.501313 2460 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f2350fc-31f6-452b-9b51-78f61b63831f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501331 2460 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501341 2460 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501349 2460 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w47j5\" (UniqueName: \"kubernetes.io/projected/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-kube-api-access-w47j5\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501356 2460 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f2350fc-31f6-452b-9b51-78f61b63831f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501364 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7f25c1f-1902-473a-bdc1-cd04cdd099b3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501371 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501379 2460 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.501735 kubelet[2460]: I0714 21:55:01.501387 2460 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f2350fc-31f6-452b-9b51-78f61b63831f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 14 21:55:01.617454 systemd[1]: Removed slice kubepods-burstable-pod1f2350fc_31f6_452b_9b51_78f61b63831f.slice - libcontainer container kubepods-burstable-pod1f2350fc_31f6_452b_9b51_78f61b63831f.slice.
Jul 14 21:55:01.617543 systemd[1]: kubepods-burstable-pod1f2350fc_31f6_452b_9b51_78f61b63831f.slice: Consumed 6.774s CPU time.
Jul 14 21:55:01.618474 systemd[1]: Removed slice kubepods-besteffort-pode7f25c1f_1902_473a_bdc1_cd04cdd099b3.slice - libcontainer container kubepods-besteffort-pode7f25c1f_1902_473a_bdc1_cd04cdd099b3.slice.
Jul 14 21:55:01.927844 kubelet[2460]: I0714 21:55:01.927458 2460 scope.go:117] "RemoveContainer" containerID="56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a"
Jul 14 21:55:01.934027 containerd[1439]: time="2025-07-14T21:55:01.933995505Z" level=info msg="RemoveContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\""
Jul 14 21:55:01.941060 containerd[1439]: time="2025-07-14T21:55:01.941012838Z" level=info msg="RemoveContainer for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" returns successfully"
Jul 14 21:55:01.941340 kubelet[2460]: I0714 21:55:01.941306 2460 scope.go:117] "RemoveContainer" containerID="d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5"
Jul 14 21:55:01.942636 containerd[1439]: time="2025-07-14T21:55:01.942597481Z" level=info msg="RemoveContainer for \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\""
Jul 14 21:55:01.961948 containerd[1439]: time="2025-07-14T21:55:01.961898477Z" level=info msg="RemoveContainer for \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\" returns successfully"
Jul 14 21:55:01.962188 kubelet[2460]: I0714 21:55:01.962125 2460 scope.go:117] "RemoveContainer" containerID="75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c"
Jul 14 21:55:01.963138 containerd[1439]: time="2025-07-14T21:55:01.963115840Z" level=info msg="RemoveContainer for \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\""
Jul 14 21:55:01.965434 containerd[1439]: time="2025-07-14T21:55:01.965397924Z" level=info msg="RemoveContainer for \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\" returns successfully"
Jul 14 21:55:01.965572 kubelet[2460]: I0714 21:55:01.965547 2460 scope.go:117] "RemoveContainer" containerID="53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07"
Jul 14 21:55:01.966480 containerd[1439]: time="2025-07-14T21:55:01.966461406Z" level=info msg="RemoveContainer for \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\""
Jul 14 21:55:01.968659 containerd[1439]: time="2025-07-14T21:55:01.968596170Z" level=info msg="RemoveContainer for \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\" returns successfully"
Jul 14 21:55:01.968843 kubelet[2460]: I0714 21:55:01.968813 2460 scope.go:117] "RemoveContainer" containerID="537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94"
Jul 14 21:55:01.969685 containerd[1439]: time="2025-07-14T21:55:01.969663372Z" level=info msg="RemoveContainer for \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\""
Jul 14 21:55:01.971578 containerd[1439]: time="2025-07-14T21:55:01.971541136Z" level=info msg="RemoveContainer for \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\" returns successfully"
Jul 14 21:55:01.971695 kubelet[2460]: I0714 21:55:01.971662 2460 scope.go:117] "RemoveContainer" containerID="56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a"
Jul 14 21:55:01.971847 containerd[1439]: time="2025-07-14T21:55:01.971809096Z" level=error msg="ContainerStatus for \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\": not found"
Jul 14 21:55:01.977438 kubelet[2460]: E0714 21:55:01.977413 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\": not found" containerID="56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a"
Jul 14 21:55:01.982020 kubelet[2460]: I0714 21:55:01.981906 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a"} err="failed to get container status \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"56d5b6aca355cf7c9089e68100ecba5e75dc136bd433bbb13fc1dcd3ac3dbf9a\": not found"
Jul 14 21:55:01.982020 kubelet[2460]: I0714 21:55:01.982011 2460 scope.go:117] "RemoveContainer" containerID="d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5"
Jul 14 21:55:01.982250 containerd[1439]: time="2025-07-14T21:55:01.982204635Z" level=error msg="ContainerStatus for \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\": not found"
Jul 14 21:55:01.982378 kubelet[2460]: E0714 21:55:01.982353 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\": not found" containerID="d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5"
Jul 14 21:55:01.982414 kubelet[2460]: I0714 21:55:01.982385 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5"} err="failed to get container status \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d15f3686268de648f97d791c0ba0ad93a6b6bf43b42ced21312bc72cdcee88a5\": not found"
Jul 14 21:55:01.982414 kubelet[2460]: I0714 21:55:01.982406 2460 scope.go:117] "RemoveContainer" containerID="75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c"
Jul 14 21:55:01.982599 containerd[1439]: time="2025-07-14T21:55:01.982571476Z" level=error msg="ContainerStatus for \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\": not found"
Jul 14 21:55:01.982755 kubelet[2460]: E0714 21:55:01.982735 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\": not found" containerID="75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c"
Jul 14 21:55:01.982803 kubelet[2460]: I0714 21:55:01.982759 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c"} err="failed to get container status \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"75e5ce958b2325bc12570967fdccd741c0ea58c99e032323c8c1d6f9effb7e0c\": not found"
Jul 14 21:55:01.982803 kubelet[2460]: I0714 21:55:01.982779 2460 scope.go:117] "RemoveContainer" containerID="53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07"
Jul 14 21:55:01.982977 containerd[1439]: time="2025-07-14T21:55:01.982943957Z" level=error msg="ContainerStatus for \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\": not found"
Jul 14 21:55:01.983212 kubelet[2460]: E0714 21:55:01.983091 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\": not found" containerID="53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07"
Jul 14 21:55:01.983212 kubelet[2460]: I0714 21:55:01.983118 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07"} err="failed to get container status \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\": rpc error: code = NotFound desc = an error occurred when try to find container \"53cb7f18a3eba1796b3c9dce4e6c3e826936e4c0f137e7da448ea064911a1b07\": not found"
Jul 14 21:55:01.983212 kubelet[2460]: I0714 21:55:01.983134 2460 scope.go:117] "RemoveContainer" containerID="537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94"
Jul 14 21:55:01.983595 containerd[1439]: time="2025-07-14T21:55:01.983512478Z" level=error msg="ContainerStatus for \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\": not found"
Jul 14 21:55:01.983675 kubelet[2460]: E0714 21:55:01.983646 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\": not found" containerID="537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94"
Jul 14 21:55:01.983675 kubelet[2460]: I0714 21:55:01.983667 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94"} err="failed to get container status \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\": rpc error: code = NotFound desc = an error occurred when try to find container \"537a86bf5268ba4236c020b4d880ece371aad2f254475ce257ce5703628ccf94\": not found"
Jul 14 21:55:01.983728 kubelet[2460]: I0714 21:55:01.983683 2460 scope.go:117] "RemoveContainer" containerID="b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8"
Jul 14 21:55:01.984553 containerd[1439]: time="2025-07-14T21:55:01.984529120Z" level=info msg="RemoveContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\""
Jul 14 21:55:01.986899 containerd[1439]: time="2025-07-14T21:55:01.986864324Z" level=info msg="RemoveContainer for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" returns successfully"
Jul 14 21:55:01.987135 kubelet[2460]: I0714 21:55:01.987048 2460 scope.go:117] "RemoveContainer" containerID="b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8"
Jul 14 21:55:01.987244 containerd[1439]: time="2025-07-14T21:55:01.987212805Z" level=error msg="ContainerStatus for \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\": not found"
Jul 14 21:55:01.987409 kubelet[2460]: E0714 21:55:01.987345 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\": not found" containerID="b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8"
Jul 14 21:55:01.987409 kubelet[2460]: I0714 21:55:01.987389 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8"} err="failed to get container status \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1bcfb64e48eee0e7e352631ebb4c0541b413e23d05fce476e88fa4ef7bbb6b8\": not found"
Jul 14 21:55:02.147839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42a3740ab72615e0045500a7ff7359bd9e5f6c712603e292c587e2bb0ddfc83d-rootfs.mount: Deactivated successfully.
Jul 14 21:55:02.147946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5255f7487ad8422fb3b4559a258784cda9bcbf4077675510cf87834ab956af86-rootfs.mount: Deactivated successfully.
Jul 14 21:55:02.148001 systemd[1]: var-lib-kubelet-pods-e7f25c1f\x2d1902\x2d473a\x2dbdc1\x2dcd04cdd099b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw47j5.mount: Deactivated successfully.
Jul 14 21:55:02.148061 systemd[1]: var-lib-kubelet-pods-1f2350fc\x2d31f6\x2d452b\x2d9b51\x2d78f61b63831f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh96gw.mount: Deactivated successfully.
Jul 14 21:55:02.148111 systemd[1]: var-lib-kubelet-pods-1f2350fc\x2d31f6\x2d452b\x2d9b51\x2d78f61b63831f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 14 21:55:02.148159 systemd[1]: var-lib-kubelet-pods-1f2350fc\x2d31f6\x2d452b\x2d9b51\x2d78f61b63831f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 14 21:55:03.073193 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jul 14 21:55:03.082929 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:40454.service: Deactivated successfully.
Jul 14 21:55:03.084302 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 21:55:03.084871 systemd[1]: session-25.scope: Consumed 1.080s CPU time.
Jul 14 21:55:03.085323 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit.
Jul 14 21:55:03.107916 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:37312.service - OpenSSH per-connection server daemon (10.0.0.1:37312).
Jul 14 21:55:03.111528 systemd-logind[1423]: Removed session 25.
Jul 14 21:55:03.139701 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 37312 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:55:03.140874 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:55:03.145361 systemd-logind[1423]: New session 26 of user core.
Jul 14 21:55:03.156751 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 21:55:03.612534 kubelet[2460]: I0714 21:55:03.612487 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2350fc-31f6-452b-9b51-78f61b63831f" path="/var/lib/kubelet/pods/1f2350fc-31f6-452b-9b51-78f61b63831f/volumes"
Jul 14 21:55:03.613155 kubelet[2460]: I0714 21:55:03.613124 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f25c1f-1902-473a-bdc1-cd04cdd099b3" path="/var/lib/kubelet/pods/e7f25c1f-1902-473a-bdc1-cd04cdd099b3/volumes"
Jul 14 21:55:04.447844 sshd[4299]: pam_unix(sshd:session): session closed for user core
Jul 14 21:55:04.455210 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:37312.service: Deactivated successfully.
Jul 14 21:55:04.460583 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 21:55:04.463694 systemd[1]: session-26.scope: Consumed 1.149s CPU time.
Jul 14 21:55:04.468534 kubelet[2460]: I0714 21:55:04.465300 2460 memory_manager.go:355] "RemoveStaleState removing state" podUID="e7f25c1f-1902-473a-bdc1-cd04cdd099b3" containerName="cilium-operator"
Jul 14 21:55:04.468534 kubelet[2460]: I0714 21:55:04.465344 2460 memory_manager.go:355] "RemoveStaleState removing state" podUID="1f2350fc-31f6-452b-9b51-78f61b63831f" containerName="cilium-agent"
Jul 14 21:55:04.467464 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit.
Jul 14 21:55:04.475961 systemd[1]: Started sshd@26-10.0.0.65:22-10.0.0.1:37318.service - OpenSSH per-connection server daemon (10.0.0.1:37318).
Jul 14 21:55:04.485607 systemd-logind[1423]: Removed session 26.
Jul 14 21:55:04.488385 systemd[1]: Created slice kubepods-burstable-podf122fb7a_f63b_496e_bbcb_86fc339c76c4.slice - libcontainer container kubepods-burstable-podf122fb7a_f63b_496e_bbcb_86fc339c76c4.slice.
Jul 14 21:55:04.521944 kubelet[2460]: I0714 21:55:04.521908 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-xtables-lock\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522077 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-hostproc\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522106 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-lib-modules\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522121 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f122fb7a-f63b-496e-bbcb-86fc339c76c4-cilium-config-path\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522148 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f122fb7a-f63b-496e-bbcb-86fc339c76c4-hubble-tls\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522163 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-bpf-maps\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522441 kubelet[2460]: I0714 21:55:04.522180 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-etc-cni-netd\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522195 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f122fb7a-f63b-496e-bbcb-86fc339c76c4-clustermesh-secrets\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522209 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-host-proc-sys-net\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522226 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-cni-path\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522253 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-host-proc-sys-kernel\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522274 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-cilium-run\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522628 kubelet[2460]: I0714 21:55:04.522290 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f122fb7a-f63b-496e-bbcb-86fc339c76c4-cilium-cgroup\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522753 kubelet[2460]: I0714 21:55:04.522304 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f122fb7a-f63b-496e-bbcb-86fc339c76c4-cilium-ipsec-secrets\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.522753 kubelet[2460]: I0714 21:55:04.522334 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cchl\" (UniqueName: \"kubernetes.io/projected/f122fb7a-f63b-496e-bbcb-86fc339c76c4-kube-api-access-5cchl\") pod \"cilium-nt746\" (UID: \"f122fb7a-f63b-496e-bbcb-86fc339c76c4\") " pod="kube-system/cilium-nt746"
Jul 14 21:55:04.528566 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 37318 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:55:04.531332 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:55:04.535709 systemd-logind[1423]: New session 27 of user core.
Jul 14 21:55:04.547780 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 21:55:04.598837 sshd[4312]: pam_unix(sshd:session): session closed for user core
Jul 14 21:55:04.609766 systemd[1]: sshd@26-10.0.0.65:22-10.0.0.1:37318.service: Deactivated successfully.
Jul 14 21:55:04.611765 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 21:55:04.612460 systemd-logind[1423]: Session 27 logged out. Waiting for processes to exit.
Jul 14 21:55:04.614548 systemd[1]: Started sshd@27-10.0.0.65:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322).
Jul 14 21:55:04.615968 systemd-logind[1423]: Removed session 27.
Jul 14 21:55:04.653739 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:55:04.654997 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:55:04.660054 systemd-logind[1423]: New session 28 of user core.
Jul 14 21:55:04.668807 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 14 21:55:04.794147 kubelet[2460]: E0714 21:55:04.794112 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:04.794739 containerd[1439]: time="2025-07-14T21:55:04.794688082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nt746,Uid:f122fb7a-f63b-496e-bbcb-86fc339c76c4,Namespace:kube-system,Attempt:0,}"
Jul 14 21:55:04.821406 containerd[1439]: time="2025-07-14T21:55:04.821284409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:55:04.821406 containerd[1439]: time="2025-07-14T21:55:04.821361970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:55:04.821679 containerd[1439]: time="2025-07-14T21:55:04.821378850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:55:04.821679 containerd[1439]: time="2025-07-14T21:55:04.821461890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:55:04.843798 systemd[1]: Started cri-containerd-8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e.scope - libcontainer container 8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e.
Jul 14 21:55:04.868009 containerd[1439]: time="2025-07-14T21:55:04.867904212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nt746,Uid:f122fb7a-f63b-496e-bbcb-86fc339c76c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\""
Jul 14 21:55:04.868661 kubelet[2460]: E0714 21:55:04.868622 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:04.871605 containerd[1439]: time="2025-07-14T21:55:04.871563219Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 21:55:04.882867 containerd[1439]: time="2025-07-14T21:55:04.882550638Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032\""
Jul 14 21:55:04.883429 containerd[1439]: time="2025-07-14T21:55:04.883345680Z" level=info msg="StartContainer for \"50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032\""
Jul 14 21:55:04.917793 systemd[1]: Started cri-containerd-50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032.scope - libcontainer container 50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032.
Jul 14 21:55:04.942902 containerd[1439]: time="2025-07-14T21:55:04.942857305Z" level=info msg="StartContainer for \"50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032\" returns successfully"
Jul 14 21:55:04.948167 kubelet[2460]: E0714 21:55:04.948134 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:04.955490 systemd[1]: cri-containerd-50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032.scope: Deactivated successfully.
Jul 14 21:55:05.001158 containerd[1439]: time="2025-07-14T21:55:05.001090649Z" level=info msg="shim disconnected" id=50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032 namespace=k8s.io
Jul 14 21:55:05.001158 containerd[1439]: time="2025-07-14T21:55:05.001151409Z" level=warning msg="cleaning up after shim disconnected" id=50eb1cdf65719527536d92734f8867b0556ba856c2b2ff3b1f275eee1fcc5032 namespace=k8s.io
Jul 14 21:55:05.001158 containerd[1439]: time="2025-07-14T21:55:05.001160409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:05.680472 kubelet[2460]: E0714 21:55:05.680429 2460 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:55:05.956450 kubelet[2460]: E0714 21:55:05.955325 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:05.964791 containerd[1439]: time="2025-07-14T21:55:05.964743410Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:55:05.979937 containerd[1439]: time="2025-07-14T21:55:05.979882917Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d\""
Jul 14 21:55:05.980485 containerd[1439]: time="2025-07-14T21:55:05.980458518Z" level=info msg="StartContainer for \"de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d\""
Jul 14 21:55:06.004825 systemd[1]: Started cri-containerd-de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d.scope - libcontainer container de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d.
Jul 14 21:55:06.028874 containerd[1439]: time="2025-07-14T21:55:06.028831841Z" level=info msg="StartContainer for \"de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d\" returns successfully"
Jul 14 21:55:06.044016 systemd[1]: cri-containerd-de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d.scope: Deactivated successfully.
Jul 14 21:55:06.064053 containerd[1439]: time="2025-07-14T21:55:06.063833382Z" level=info msg="shim disconnected" id=de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d namespace=k8s.io
Jul 14 21:55:06.064053 containerd[1439]: time="2025-07-14T21:55:06.063885062Z" level=warning msg="cleaning up after shim disconnected" id=de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d namespace=k8s.io
Jul 14 21:55:06.064053 containerd[1439]: time="2025-07-14T21:55:06.063895222Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:06.626904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6e443c184192de28058e3ee9b962455a8be738ac0b09d43fffd5a1f1cd387d-rootfs.mount: Deactivated successfully.
Jul 14 21:55:06.958220 kubelet[2460]: E0714 21:55:06.958102 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:06.963639 containerd[1439]: time="2025-07-14T21:55:06.962013843Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:55:06.985956 containerd[1439]: time="2025-07-14T21:55:06.985896764Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587\""
Jul 14 21:55:06.986541 containerd[1439]: time="2025-07-14T21:55:06.986509085Z" level=info msg="StartContainer for \"b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587\""
Jul 14 21:55:07.011002 systemd[1]: run-containerd-runc-k8s.io-b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587-runc.dhOC4H.mount: Deactivated successfully.
Jul 14 21:55:07.021864 systemd[1]: Started cri-containerd-b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587.scope - libcontainer container b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587.
Jul 14 21:55:07.045922 systemd[1]: cri-containerd-b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587.scope: Deactivated successfully.
Jul 14 21:55:07.047020 containerd[1439]: time="2025-07-14T21:55:07.046976348Z" level=info msg="StartContainer for \"b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587\" returns successfully"
Jul 14 21:55:07.075449 containerd[1439]: time="2025-07-14T21:55:07.075391276Z" level=info msg="shim disconnected" id=b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587 namespace=k8s.io
Jul 14 21:55:07.075449 containerd[1439]: time="2025-07-14T21:55:07.075444396Z" level=warning msg="cleaning up after shim disconnected" id=b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587 namespace=k8s.io
Jul 14 21:55:07.075449 containerd[1439]: time="2025-07-14T21:55:07.075453556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:07.627028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0fc7820ce44eba786d68da85924139afd621518b5cdd5b6333f4722222ac587-rootfs.mount: Deactivated successfully.
Jul 14 21:55:07.962773 kubelet[2460]: E0714 21:55:07.962373 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:07.965388 containerd[1439]: time="2025-07-14T21:55:07.965351778Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:55:07.976572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240346571.mount: Deactivated successfully.
Jul 14 21:55:07.978396 containerd[1439]: time="2025-07-14T21:55:07.978352400Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e\""
Jul 14 21:55:07.978849 containerd[1439]: time="2025-07-14T21:55:07.978809121Z" level=info msg="StartContainer for \"ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e\""
Jul 14 21:55:08.002105 systemd[1]: Started cri-containerd-ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e.scope - libcontainer container ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e.
Jul 14 21:55:08.020801 systemd[1]: cri-containerd-ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e.scope: Deactivated successfully.
Jul 14 21:55:08.040201 containerd[1439]: time="2025-07-14T21:55:08.040068264Z" level=info msg="StartContainer for \"ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e\" returns successfully"
Jul 14 21:55:08.048103 containerd[1439]: time="2025-07-14T21:55:08.045789633Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf122fb7a_f63b_496e_bbcb_86fc339c76c4.slice/cri-containerd-ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e.scope/memory.events\": no such file or directory"
Jul 14 21:55:08.061332 containerd[1439]: time="2025-07-14T21:55:08.061267379Z" level=info msg="shim disconnected" id=ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e namespace=k8s.io
Jul 14 21:55:08.061332 containerd[1439]: time="2025-07-14T21:55:08.061329739Z" level=warning msg="cleaning up after shim disconnected" id=ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e namespace=k8s.io
Jul 14 21:55:08.061332 containerd[1439]: time="2025-07-14T21:55:08.061339499Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:55:08.500443 kubelet[2460]: I0714 21:55:08.500173 2460 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:55:08Z","lastTransitionTime":"2025-07-14T21:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:55:08.627138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6e371d61ce94e2423848353b475e22d92dceac58cfe0150c953bca5bd2d04e-rootfs.mount: Deactivated successfully.
Jul 14 21:55:08.967261 kubelet[2460]: E0714 21:55:08.966326 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:08.970776 containerd[1439]: time="2025-07-14T21:55:08.970621890Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:55:08.995220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689416477.mount: Deactivated successfully.
Jul 14 21:55:08.998775 containerd[1439]: time="2025-07-14T21:55:08.998720576Z" level=info msg="CreateContainer within sandbox \"8350618b2e963d49dc63962199b8401f1a6ef70f745efecec17b70323d5f536e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058\""
Jul 14 21:55:08.999394 containerd[1439]: time="2025-07-14T21:55:08.999353657Z" level=info msg="StartContainer for \"5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058\""
Jul 14 21:55:09.032806 systemd[1]: Started cri-containerd-5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058.scope - libcontainer container 5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058.
Jul 14 21:55:09.057491 containerd[1439]: time="2025-07-14T21:55:09.057436593Z" level=info msg="StartContainer for \"5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058\" returns successfully"
Jul 14 21:55:09.336647 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 14 21:55:09.971867 kubelet[2460]: E0714 21:55:09.971547 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:09.997272 kubelet[2460]: I0714 21:55:09.997175 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nt746" podStartSLOduration=5.997161489 podStartE2EDuration="5.997161489s" podCreationTimestamp="2025-07-14 21:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:55:09.996912169 +0000 UTC m=+134.483158341" watchObservedRunningTime="2025-07-14 21:55:09.997161489 +0000 UTC m=+134.483407661"
Jul 14 21:55:10.973962 kubelet[2460]: E0714 21:55:10.973926 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:10.994090 systemd[1]: run-containerd-runc-k8s.io-5a55611eb2a940739e4c00e846e67a887736eaf6f6b70122e927c80b8dfbc058-runc.lhNXTh.mount: Deactivated successfully.
Jul 14 21:55:21.610697 kubelet[2460]: E0714 21:55:21.610422 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:29.610118 kubelet[2460]: E0714 21:55:29.609991 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:31.615828 kubelet[2460]: E0714 21:55:31.611995 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:34.795288 kubelet[2460]: E0714 21:55:34.795242 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:36.610775 kubelet[2460]: E0714 21:55:36.610680 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:44.253224 systemd-networkd[1376]: lxc_health: Link UP
Jul 14 21:55:44.267561 systemd-networkd[1376]: lxc_health: Gained carrier
Jul 14 21:55:44.796222 kubelet[2460]: E0714 21:55:44.796075 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:45.039661 kubelet[2460]: E0714 21:55:45.039254 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:46.191985 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jul 14 21:55:49.610770 kubelet[2460]: E0714 21:55:49.610684 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:55:51.092756 sshd[4320]: pam_unix(sshd:session): session closed for user core
Jul 14 21:55:51.095648 systemd[1]: sshd@27-10.0.0.65:22-10.0.0.1:37322.service: Deactivated successfully.
Jul 14 21:55:51.097228 systemd[1]: session-28.scope: Deactivated successfully.
Jul 14 21:55:51.099000 systemd-logind[1423]: Session 28 logged out. Waiting for processes to exit.
Jul 14 21:55:51.100311 systemd-logind[1423]: Removed session 28.