Jul 14 21:19:42.866020 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:19:42.866043 kernel: Linux version 6.12.37-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Jul 14 19:48:49 -00 2025
Jul 14 21:19:42.866052 kernel: KASLR enabled
Jul 14 21:19:42.866058 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:19:42.866070 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 14 21:19:42.866076 kernel: random: crng init done
Jul 14 21:19:42.866085 kernel: secureboot: Secure boot disabled
Jul 14 21:19:42.866091 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:19:42.866097 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 14 21:19:42.866105 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:19:42.866111 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866116 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866122 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866128 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866135 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866142 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866148 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866154 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866161 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:19:42.866166 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:19:42.866172 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 14 21:19:42.866179 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:19:42.866185 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 14 21:19:42.866191 kernel: Zone ranges:
Jul 14 21:19:42.866197 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:19:42.866204 kernel: DMA32 empty
Jul 14 21:19:42.866210 kernel: Normal empty
Jul 14 21:19:42.866216 kernel: Device empty
Jul 14 21:19:42.866221 kernel: Movable zone start for each node
Jul 14 21:19:42.866227 kernel: Early memory node ranges
Jul 14 21:19:42.866234 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 14 21:19:42.866240 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 14 21:19:42.866246 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 14 21:19:42.866251 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 14 21:19:42.866258 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 14 21:19:42.866263 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 14 21:19:42.866269 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 14 21:19:42.866277 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 14 21:19:42.866283 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 14 21:19:42.866289 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:19:42.866297 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:19:42.866304 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:19:42.866310 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:19:42.866318 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:19:42.866324 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:19:42.866331 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 14 21:19:42.866337 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:19:42.866343 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:19:42.866350 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:19:42.866356 kernel: psci: Trusted OS migration not required
Jul 14 21:19:42.866362 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:19:42.866369 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:19:42.866375 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 14 21:19:42.866383 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 14 21:19:42.866390 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:19:42.866396 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:19:42.866402 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:19:42.866409 kernel: CPU features: detected: Spectre-v4
Jul 14 21:19:42.866415 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:19:42.866421 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:19:42.866428 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:19:42.866434 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:19:42.866441 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:19:42.866447 kernel: alternatives: applying boot alternatives
Jul 14 21:19:42.866454 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=67789a938d81feeebc020d9415b455585ce5bf173608fce319087a5433c30d80
Jul 14 21:19:42.866462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:19:42.866469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:19:42.866475 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:19:42.866481 kernel: Fallback order for Node 0: 0
Jul 14 21:19:42.866488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 14 21:19:42.866494 kernel: Policy zone: DMA
Jul 14 21:19:42.866501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:19:42.866507 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 14 21:19:42.866513 kernel: software IO TLB: area num 4.
Jul 14 21:19:42.866520 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 14 21:19:42.866526 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 14 21:19:42.866534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:19:42.866540 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:19:42.866547 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:19:42.866554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:19:42.866560 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:19:42.866567 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:19:42.866573 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:19:42.866579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:19:42.866586 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:19:42.866592 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:19:42.866599 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:19:42.866607 kernel: GICv3: 256 SPIs implemented
Jul 14 21:19:42.866613 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:19:42.866619 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:19:42.866626 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:19:42.866632 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 14 21:19:42.866638 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:19:42.866645 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:19:42.866651 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:19:42.866657 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:19:42.866664 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 14 21:19:42.866670 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 14 21:19:42.866677 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:19:42.866684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:19:42.866691 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:19:42.866709 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:19:42.866716 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:19:42.866723 kernel: arm-pv: using stolen time PV
Jul 14 21:19:42.866729 kernel: Console: colour dummy device 80x25
Jul 14 21:19:42.866736 kernel: ACPI: Core revision 20240827
Jul 14 21:19:42.866743 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:19:42.866749 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:19:42.866756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 14 21:19:42.866764 kernel: landlock: Up and running.
Jul 14 21:19:42.866771 kernel: SELinux: Initializing.
Jul 14 21:19:42.866778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:19:42.866785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:19:42.866791 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:19:42.866798 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:19:42.866805 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 14 21:19:42.866811 kernel: Remapping and enabling EFI services.
Jul 14 21:19:42.866818 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:19:42.866830 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:19:42.866837 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:19:42.866844 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 14 21:19:42.866852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:19:42.866859 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:19:42.866866 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:19:42.866873 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:19:42.866880 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 14 21:19:42.866888 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:19:42.866895 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:19:42.866902 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:19:42.866908 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:19:42.866916 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 14 21:19:42.866923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:19:42.866929 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:19:42.866936 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:19:42.866943 kernel: SMP: Total of 4 processors activated.
Jul 14 21:19:42.866951 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:19:42.866964 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:19:42.866972 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:19:42.866979 kernel: CPU features: detected: Common not Private translations
Jul 14 21:19:42.866986 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:19:42.866993 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:19:42.867000 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:19:42.867007 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:19:42.867014 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:19:42.867022 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:19:42.867032 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:19:42.867039 kernel: alternatives: applying system-wide alternatives
Jul 14 21:19:42.867046 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 14 21:19:42.867054 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 14 21:19:42.867060 kernel: devtmpfs: initialized
Jul 14 21:19:42.867068 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:19:42.867075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:19:42.867082 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:19:42.867090 kernel: 0 pages in range for non-PLT usage
Jul 14 21:19:42.867100 kernel: 508448 pages in range for PLT usage
Jul 14 21:19:42.867107 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:19:42.867114 kernel: SMBIOS 3.0.0 present.
Jul 14 21:19:42.867121 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 14 21:19:42.867128 kernel: DMI: Memory slots populated: 1/1
Jul 14 21:19:42.867134 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:19:42.867141 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:19:42.867148 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:19:42.867157 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:19:42.867164 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:19:42.867171 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Jul 14 21:19:42.867178 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:19:42.867185 kernel: cpuidle: using governor menu
Jul 14 21:19:42.867192 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:19:42.867199 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:19:42.867206 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:19:42.867213 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:19:42.867221 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:19:42.867228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:19:42.867235 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:19:42.867242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:19:42.867249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:19:42.867255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:19:42.867262 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:19:42.867269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:19:42.867276 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:19:42.867284 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:19:42.867291 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:19:42.867298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:19:42.867305 kernel: ACPI: Interpreter enabled
Jul 14 21:19:42.867312 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:19:42.867319 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:19:42.867326 kernel: ACPI: CPU0 has been hot-added
Jul 14 21:19:42.867333 kernel: ACPI: CPU1 has been hot-added
Jul 14 21:19:42.867340 kernel: ACPI: CPU2 has been hot-added
Jul 14 21:19:42.867347 kernel: ACPI: CPU3 has been hot-added
Jul 14 21:19:42.867355 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:19:42.867362 kernel: printk: legacy console [ttyAMA0] enabled
Jul 14 21:19:42.867369 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:19:42.867511 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:19:42.867577 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:19:42.867636 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:19:42.867723 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:19:42.867794 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:19:42.867803 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:19:42.867810 kernel: PCI host bridge to bus 0000:00
Jul 14 21:19:42.867877 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:19:42.867931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:19:42.867994 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:19:42.868053 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:19:42.868138 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 14 21:19:42.868208 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 14 21:19:42.868269 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 14 21:19:42.868330 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 14 21:19:42.868389 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:19:42.868448 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 14 21:19:42.868509 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 14 21:19:42.868571 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 14 21:19:42.868624 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:19:42.868677 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:19:42.868790 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:19:42.868802 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:19:42.868809 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:19:42.868816 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:19:42.868826 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:19:42.868833 kernel: iommu: Default domain type: Translated
Jul 14 21:19:42.868840 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:19:42.868847 kernel: efivars: Registered efivars operations
Jul 14 21:19:42.868854 kernel: vgaarb: loaded
Jul 14 21:19:42.868861 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:19:42.868868 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:19:42.868875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:19:42.868882 kernel: pnp: PnP ACPI init
Jul 14 21:19:42.868965 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:19:42.868976 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:19:42.868983 kernel: NET: Registered PF_INET protocol family
Jul 14 21:19:42.868990 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:19:42.868997 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:19:42.869004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:19:42.869011 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:19:42.869018 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:19:42.869027 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:19:42.869034 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:19:42.869042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:19:42.869048 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:19:42.869055 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:19:42.869062 kernel: kvm [1]: HYP mode not available
Jul 14 21:19:42.869069 kernel: Initialise system trusted keyrings
Jul 14 21:19:42.869076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:19:42.869084 kernel: Key type asymmetric registered
Jul 14 21:19:42.869091 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:19:42.869099 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 14 21:19:42.869106 kernel: io scheduler mq-deadline registered
Jul 14 21:19:42.869113 kernel: io scheduler kyber registered
Jul 14 21:19:42.869120 kernel: io scheduler bfq registered
Jul 14 21:19:42.869127 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:19:42.869134 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:19:42.869141 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:19:42.869204 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:19:42.869215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:19:42.869223 kernel: thunder_xcv, ver 1.0
Jul 14 21:19:42.869229 kernel: thunder_bgx, ver 1.0
Jul 14 21:19:42.869237 kernel: nicpf, ver 1.0
Jul 14 21:19:42.869243 kernel: nicvf, ver 1.0
Jul 14 21:19:42.869318 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:19:42.869374 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:19:42 UTC (1752527982)
Jul 14 21:19:42.869383 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:19:42.869391 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 14 21:19:42.869399 kernel: watchdog: NMI not fully supported
Jul 14 21:19:42.869406 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:19:42.869413 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:19:42.869420 kernel: Segment Routing with IPv6
Jul 14 21:19:42.869427 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:19:42.869434 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:19:42.869441 kernel: Key type dns_resolver registered
Jul 14 21:19:42.869448 kernel: registered taskstats version 1
Jul 14 21:19:42.869455 kernel: Loading compiled-in X.509 certificates
Jul 14 21:19:42.869464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.37-flatcar: df8d0778d0d903123f31d838371daafc849980e6'
Jul 14 21:19:42.869471 kernel: Demotion targets for Node 0: null
Jul 14 21:19:42.869477 kernel: Key type .fscrypt registered
Jul 14 21:19:42.869484 kernel: Key type fscrypt-provisioning registered
Jul 14 21:19:42.869491 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:19:42.869498 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:19:42.869505 kernel: ima: No architecture policies found
Jul 14 21:19:42.869512 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:19:42.869520 kernel: clk: Disabling unused clocks
Jul 14 21:19:42.869539 kernel: PM: genpd: Disabling unused power domains
Jul 14 21:19:42.869546 kernel: Warning: unable to open an initial console.
Jul 14 21:19:42.869554 kernel: Freeing unused kernel memory: 39424K
Jul 14 21:19:42.869561 kernel: Run /init as init process
Jul 14 21:19:42.869568 kernel: with arguments:
Jul 14 21:19:42.869575 kernel: /init
Jul 14 21:19:42.869582 kernel: with environment:
Jul 14 21:19:42.869588 kernel: HOME=/
Jul 14 21:19:42.869595 kernel: TERM=linux
Jul 14 21:19:42.869604 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:19:42.869613 systemd[1]: Successfully made /usr/ read-only.
Jul 14 21:19:42.869623 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 21:19:42.869631 systemd[1]: Detected virtualization kvm.
Jul 14 21:19:42.869638 systemd[1]: Detected architecture arm64.
Jul 14 21:19:42.869645 systemd[1]: Running in initrd.
Jul 14 21:19:42.869652 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:19:42.869661 systemd[1]: Hostname set to .
Jul 14 21:19:42.869669 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:19:42.869676 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:19:42.869684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:19:42.869691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:19:42.869709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:19:42.870097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:19:42.870117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:19:42.870129 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:19:42.870137 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:19:42.870145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:19:42.870153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:19:42.870160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:19:42.870168 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:19:42.870175 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:19:42.870184 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:19:42.870192 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:19:42.870199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:19:42.870207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:19:42.870214 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:19:42.870222 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 14 21:19:42.870229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:19:42.870237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:19:42.870245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:19:42.870253 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:19:42.870260 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:19:42.870268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:19:42.870276 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:19:42.870284 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 14 21:19:42.870291 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:19:42.870299 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:19:42.870306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:19:42.870315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:19:42.870323 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:19:42.870331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:19:42.870339 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:19:42.870347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:19:42.870373 systemd-journald[244]: Collecting audit messages is disabled.
Jul 14 21:19:42.871199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:19:42.871208 systemd-journald[244]: Journal started
Jul 14 21:19:42.871229 systemd-journald[244]: Runtime Journal (/run/log/journal/b0c8073632104e599d01f693ab4485f2) is 6M, max 48.5M, 42.4M free.
Jul 14 21:19:42.860344 systemd-modules-load[246]: Inserted module 'overlay'
Jul 14 21:19:42.874213 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:19:42.877720 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:19:42.876458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:19:42.879815 kernel: Bridge firewalling registered
Jul 14 21:19:42.878401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:19:42.879631 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 14 21:19:42.880931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:19:42.889854 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:19:42.892163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:19:42.894915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:19:42.896352 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 14 21:19:42.899636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:19:42.907259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:19:42.910825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:19:42.913738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:19:42.915176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:19:42.918641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:19:42.928557 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=67789a938d81feeebc020d9415b455585ce5bf173608fce319087a5433c30d80
Jul 14 21:19:42.952932 systemd-resolved[289]: Positive Trust Anchors:
Jul 14 21:19:42.952950 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:19:42.952990 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:19:42.957792 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jul 14 21:19:42.958932 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:19:42.961586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:19:43.004717 kernel: SCSI subsystem initialized
Jul 14 21:19:43.009710 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:19:43.017725 kernel: iscsi: registered transport (tcp)
Jul 14 21:19:43.029934 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:19:43.029951 kernel: QLogic iSCSI HBA Driver
Jul 14 21:19:43.049422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 21:19:43.073056 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 21:19:43.076641 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 21:19:43.121900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:19:43.124218 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:19:43.192740 kernel: raid6: neonx8 gen() 15758 MB/s
Jul 14 21:19:43.209724 kernel: raid6: neonx4 gen() 15786 MB/s
Jul 14 21:19:43.226727 kernel: raid6: neonx2 gen() 13177 MB/s
Jul 14 21:19:43.243721 kernel: raid6: neonx1 gen() 10454 MB/s
Jul 14 21:19:43.260723 kernel: raid6: int64x8 gen() 6886 MB/s
Jul 14 21:19:43.277721 kernel: raid6: int64x4 gen() 7334 MB/s
Jul 14 21:19:43.294723 kernel: raid6: int64x2 gen() 6098 MB/s
Jul 14 21:19:43.311830 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 14 21:19:43.311845 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Jul 14 21:19:43.329862 kernel: raid6: .... xor() 12309 MB/s, rmw enabled
Jul 14 21:19:43.329879 kernel: raid6: using neon recovery algorithm
Jul 14 21:19:43.334720 kernel: xor: measuring software checksum speed
Jul 14 21:19:43.336021 kernel: 8regs : 18565 MB/sec
Jul 14 21:19:43.336051 kernel: 32regs : 21664 MB/sec
Jul 14 21:19:43.337300 kernel: arm64_neon : 27984 MB/sec
Jul 14 21:19:43.337315 kernel: xor: using function: arm64_neon (27984 MB/sec)
Jul 14 21:19:43.394734 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:19:43.401232 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:19:43.403791 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:19:43.432600 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Jul 14 21:19:43.436806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:19:43.438814 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:19:43.464169 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Jul 14 21:19:43.486682 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:19:43.489017 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:19:43.543781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:19:43.547370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:19:43.591327 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:19:43.592841 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:19:43.594799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:19:43.594813 kernel: GPT:9289727 != 19775487
Jul 14 21:19:43.594822 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:19:43.596039 kernel: GPT:9289727 != 19775487
Jul 14 21:19:43.596065 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:19:43.596303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:19:43.597830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:19:43.596424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:19:43.600832 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:19:43.604450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:19:43.631043 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:19:43.638378 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:19:43.640414 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:19:43.656202 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:19:43.664802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:19:43.671965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:19:43.673176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:19:43.675379 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:19:43.678065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:19:43.679998 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:19:43.682530 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:19:43.684375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:19:43.711568 disk-uuid[591]: Primary Header is updated.
Jul 14 21:19:43.711568 disk-uuid[591]: Secondary Entries is updated.
Jul 14 21:19:43.711568 disk-uuid[591]: Secondary Header is updated.
Jul 14 21:19:43.716742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:19:43.715740 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:19:44.729724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:19:44.732645 disk-uuid[597]: The operation has completed successfully.
Jul 14 21:19:44.758746 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:19:44.758846 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:19:44.786458 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:19:44.809567 sh[611]: Success
Jul 14 21:19:44.825725 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:19:44.825913 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:19:44.825930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 14 21:19:44.837195 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 14 21:19:44.862651 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:19:44.865089 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:19:44.877069 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:19:44.884338 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 14 21:19:44.884389 kernel: BTRFS: device fsid babe610d-6a90-4bd8-ba2e-f272110d82d6 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623)
Jul 14 21:19:44.885684 kernel: BTRFS info (device dm-0): first mount of filesystem babe610d-6a90-4bd8-ba2e-f272110d82d6
Jul 14 21:19:44.887257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:19:44.887270 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 14 21:19:44.890586 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:19:44.891882 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 14 21:19:44.893403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:19:44.894151 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:19:44.895812 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:19:44.922893 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (654)
Jul 14 21:19:44.922935 kernel: BTRFS info (device vda6): first mount of filesystem 8f9582c9-032b-4eae-a997-04ddea724807
Jul 14 21:19:44.922945 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:19:44.924373 kernel: BTRFS info (device vda6): using free-space-tree
Jul 14 21:19:44.935020 kernel: BTRFS info (device vda6): last unmount of filesystem 8f9582c9-032b-4eae-a997-04ddea724807
Jul 14 21:19:44.935065 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:19:44.937047 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:19:44.998646 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:19:45.002157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:19:45.046846 systemd-networkd[798]: lo: Link UP
Jul 14 21:19:45.046855 systemd-networkd[798]: lo: Gained carrier
Jul 14 21:19:45.047606 systemd-networkd[798]: Enumeration completed
Jul 14 21:19:45.047724 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:19:45.048972 systemd[1]: Reached target network.target - Network.
Jul 14 21:19:45.050657 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:19:45.050661 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:19:45.051199 systemd-networkd[798]: eth0: Link UP
Jul 14 21:19:45.051201 systemd-networkd[798]: eth0: Gained carrier
Jul 14 21:19:45.051209 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:19:45.083750 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:19:45.091570 ignition[709]: Ignition 2.21.0
Jul 14 21:19:45.091583 ignition[709]: Stage: fetch-offline
Jul 14 21:19:45.091614 ignition[709]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:45.091622 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:45.091821 ignition[709]: parsed url from cmdline: ""
Jul 14 21:19:45.091824 ignition[709]: no config URL provided
Jul 14 21:19:45.091828 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:19:45.091835 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:19:45.091852 ignition[709]: op(1): [started] loading QEMU firmware config module
Jul 14 21:19:45.091856 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:19:45.099327 ignition[709]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:19:45.137965 ignition[709]: parsing config with SHA512: ee0b1da0b9fee719c0707490752ce8ebec39a03fcd529020d68a914cdde5f4406f8dc771cb9219b0a89049f0ca50233dc7aa5972996e94eca7237b4621b3ffc6
Jul 14 21:19:45.144449 unknown[709]: fetched base config from "system"
Jul 14 21:19:45.144461 unknown[709]: fetched user config from "qemu"
Jul 14 21:19:45.144867 ignition[709]: fetch-offline: fetch-offline passed
Jul 14 21:19:45.146604 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:19:45.144925 ignition[709]: Ignition finished successfully
Jul 14 21:19:45.148409 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:19:45.149195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:19:45.178051 ignition[811]: Ignition 2.21.0
Jul 14 21:19:45.178073 ignition[811]: Stage: kargs
Jul 14 21:19:45.178244 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:45.178253 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:45.180675 ignition[811]: kargs: kargs passed
Jul 14 21:19:45.180746 ignition[811]: Ignition finished successfully
Jul 14 21:19:45.184381 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:19:45.186357 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:19:45.219511 ignition[819]: Ignition 2.21.0
Jul 14 21:19:45.219522 ignition[819]: Stage: disks
Jul 14 21:19:45.219675 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:45.222844 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:19:45.219684 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:45.224086 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:19:45.220511 ignition[819]: disks: disks passed
Jul 14 21:19:45.225835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:19:45.220556 ignition[819]: Ignition finished successfully
Jul 14 21:19:45.227797 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:19:45.229638 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:19:45.231026 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:19:45.233829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:19:45.257832 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 14 21:19:45.261651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:19:45.263875 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:19:45.331719 kernel: EXT4-fs (vda9): mounted filesystem f000bd66-e59e-4cb0-8952-aa4d390a49a2 r/w with ordered data mode. Quota mode: none.
Jul 14 21:19:45.332181 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:19:45.333388 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:19:45.336671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:19:45.338981 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:19:45.339901 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:19:45.339945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:19:45.339980 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:19:45.350666 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:19:45.352710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:19:45.360736 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (838)
Jul 14 21:19:45.360770 kernel: BTRFS info (device vda6): first mount of filesystem 8f9582c9-032b-4eae-a997-04ddea724807
Jul 14 21:19:45.360781 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:19:45.362713 kernel: BTRFS info (device vda6): using free-space-tree
Jul 14 21:19:45.366514 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:19:45.398817 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:19:45.403121 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:19:45.406831 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:19:45.411274 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:19:45.483966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:19:45.486031 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:19:45.488916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:19:45.503723 kernel: BTRFS info (device vda6): last unmount of filesystem 8f9582c9-032b-4eae-a997-04ddea724807
Jul 14 21:19:45.516838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:19:45.528954 ignition[951]: INFO : Ignition 2.21.0
Jul 14 21:19:45.528954 ignition[951]: INFO : Stage: mount
Jul 14 21:19:45.530535 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:45.530535 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:45.530535 ignition[951]: INFO : mount: mount passed
Jul 14 21:19:45.530535 ignition[951]: INFO : Ignition finished successfully
Jul 14 21:19:45.532056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:19:45.535404 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:19:45.883181 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:19:45.887844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:19:45.904557 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (964)
Jul 14 21:19:45.904599 kernel: BTRFS info (device vda6): first mount of filesystem 8f9582c9-032b-4eae-a997-04ddea724807
Jul 14 21:19:45.904610 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:19:45.906192 kernel: BTRFS info (device vda6): using free-space-tree
Jul 14 21:19:45.908822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:19:45.942892 ignition[981]: INFO : Ignition 2.21.0
Jul 14 21:19:45.943872 ignition[981]: INFO : Stage: files
Jul 14 21:19:45.944556 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:45.944556 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:45.946871 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:19:45.946871 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:19:45.946871 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:19:45.951036 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:19:45.951036 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:19:45.951036 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:19:45.951036 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:19:45.951036 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 14 21:19:45.947756 unknown[981]: wrote ssh authorized keys file for user: core
Jul 14 21:19:46.829807 systemd-networkd[798]: eth0: Gained IPv6LL
Jul 14 21:19:46.853175 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 21:19:50.293072 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:19:50.293072 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:19:50.296521 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 14 21:19:50.670829 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 21:19:50.754633 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:19:50.756362 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:19:50.768279 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 14 21:19:51.186684 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 21:19:51.677205 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:19:51.677205 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 14 21:19:51.680784 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 14 21:19:51.682743 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:19:51.701057 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:19:51.704238 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:19:51.706828 ignition[981]: INFO : files: files passed
Jul 14 21:19:51.706828 ignition[981]: INFO : Ignition finished successfully
Jul 14 21:19:51.708622 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 21:19:51.712835 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 21:19:51.715223 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 21:19:51.726412 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:19:51.726507 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:19:51.729633 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:19:51.731210 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:19:51.733789 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:19:51.732632 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:19:51.737295 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:19:51.734038 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:19:51.736777 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:19:51.789382 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:19:51.789479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:19:51.791647 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:19:51.793482 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:19:51.795233 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:19:51.795907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:19:51.828270 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:19:51.830506 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:19:51.852260 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:19:51.853451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:19:51.855388 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:19:51.857078 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:19:51.857192 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:19:51.859435 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:19:51.860496 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:19:51.862188 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:19:51.863864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:19:51.865516 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 21:19:51.867302 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 14 21:19:51.869128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 21:19:51.870791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:19:51.872743 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 21:19:51.874378 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 21:19:51.876122 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 21:19:51.877526 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:19:51.877647 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:19:51.879748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:19:51.881478 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:19:51.883210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 21:19:51.883318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:19:51.885047 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 21:19:51.885162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:19:51.887506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 21:19:51.887628 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:19:51.889755 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 21:19:51.891126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 21:19:51.891781 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:19:51.893018 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 21:19:51.894581 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 21:19:51.896309 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 21:19:51.896394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:19:51.897775 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 21:19:51.897859 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:19:51.899453 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 21:19:51.899575 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:19:51.901628 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 21:19:51.901747 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 21:19:51.903919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 21:19:51.905940 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 21:19:51.906781 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 21:19:51.906920 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:19:51.908568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 21:19:51.908673 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:19:51.913780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 21:19:51.918718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 21:19:51.924430 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 21:19:51.927104 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 21:19:51.927524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 21:19:51.930142 ignition[1037]: INFO : Ignition 2.21.0
Jul 14 21:19:51.930142 ignition[1037]: INFO : Stage: umount
Jul 14 21:19:51.931462 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:19:51.931462 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:19:51.933283 ignition[1037]: INFO : umount: umount passed
Jul 14 21:19:51.933283 ignition[1037]: INFO : Ignition finished successfully
Jul 14 21:19:51.933122 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 21:19:51.933212 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 21:19:51.934336 systemd[1]: Stopped target network.target - Network.
Jul 14 21:19:51.935546 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 21:19:51.935599 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 21:19:51.937080 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 21:19:51.937122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 21:19:51.938637 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 21:19:51.938687 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 21:19:51.940327 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 21:19:51.940364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 21:19:51.941830 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 21:19:51.941877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 21:19:51.943526 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 21:19:51.945123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 21:19:51.948789 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 21:19:51.948877 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 21:19:51.951850 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 14 21:19:51.952101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 21:19:51.952142 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:19:51.955501 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 14 21:19:51.955690 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 21:19:51.955846 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 21:19:51.960374 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 14 21:19:51.960725 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 14 21:19:51.962131 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 21:19:51.962175 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:19:51.964802 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 21:19:51.966012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:19:51.966066 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 21:19:51.967878 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:19:51.967920 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:19:51.970583 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:19:51.970626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 21:19:51.972512 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:19:51.974821 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 21:19:51.983035 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:19:51.983195 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:19:51.985173 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:19:51.985251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 21:19:51.986407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:19:51.986442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 21:19:51.987551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:19:51.987582 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:19:51.989265 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:19:51.989310 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:19:51.991870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:19:51.991914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 21:19:51.994396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 14 21:19:51.994452 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:19:51.997880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 21:19:51.999180 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 14 21:19:51.999243 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 21:19:52.002222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:19:52.002265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:19:52.005051 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 14 21:19:52.005092 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 21:19:52.008058 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:19:52.008097 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:19:52.010206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:19:52.010252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:19:52.014375 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:19:52.014483 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 21:19:52.016882 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 21:19:52.019005 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 21:19:52.033160 systemd[1]: Switching root. Jul 14 21:19:52.060024 systemd-journald[244]: Journal stopped Jul 14 21:19:52.902575 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Jul 14 21:19:52.902623 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:19:52.902635 kernel: SELinux: policy capability open_perms=1 Jul 14 21:19:52.902644 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:19:52.902655 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:19:52.902666 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:19:52.902678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:19:52.902689 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:19:52.902720 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:19:52.902732 kernel: SELinux: policy capability userspace_initial_context=0 Jul 14 21:19:52.902741 kernel: audit: type=1403 audit(1752527992.296:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:19:52.902755 systemd[1]: Successfully loaded SELinux policy in 52.863ms. Jul 14 21:19:52.902773 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.244ms. Jul 14 21:19:52.902784 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 21:19:52.902795 systemd[1]: Detected virtualization kvm. Jul 14 21:19:52.902806 systemd[1]: Detected architecture arm64. Jul 14 21:19:52.902816 systemd[1]: Detected first boot. Jul 14 21:19:52.902826 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:19:52.902838 kernel: NET: Registered PF_VSOCK protocol family Jul 14 21:19:52.902848 zram_generator::config[1083]: No configuration found. Jul 14 21:19:52.902858 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:19:52.902869 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 14 21:19:52.902879 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:19:52.902891 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 21:19:52.902900 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:19:52.902911 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 21:19:52.902921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 21:19:52.902941 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 21:19:52.902953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 21:19:52.902963 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 21:19:52.902973 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 21:19:52.902984 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 21:19:52.902996 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 21:19:52.903006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:19:52.903017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:19:52.903027 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 21:19:52.903038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 21:19:52.903048 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 21:19:52.903058 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:19:52.903072 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 14 21:19:52.903084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:19:52.903094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:19:52.903105 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 21:19:52.903115 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 21:19:52.903125 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 21:19:52.903135 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 21:19:52.903145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:19:52.903156 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 21:19:52.903167 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:19:52.903177 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:19:52.903187 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 21:19:52.903200 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 21:19:52.903211 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 14 21:19:52.903222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:19:52.903232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:19:52.903242 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:19:52.903252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 21:19:52.903263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 21:19:52.903274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 21:19:52.903285 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 14 21:19:52.903295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 21:19:52.903306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 21:19:52.903316 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 21:19:52.903327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:19:52.903337 systemd[1]: Reached target machines.target - Containers. Jul 14 21:19:52.903347 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 21:19:52.903361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:19:52.903374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:19:52.903384 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 21:19:52.903394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:19:52.903405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:19:52.903415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:19:52.903425 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 21:19:52.903435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:19:52.903446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:19:52.903458 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:19:52.903468 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 21:19:52.903478 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 14 21:19:52.903489 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:19:52.903500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 21:19:52.903510 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:19:52.903520 kernel: loop: module loaded Jul 14 21:19:52.903529 kernel: fuse: init (API version 7.41) Jul 14 21:19:52.903541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:19:52.903551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 21:19:52.903561 kernel: ACPI: bus type drm_connector registered Jul 14 21:19:52.903572 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 21:19:52.903582 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 14 21:19:52.903592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 21:19:52.903604 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:19:52.903614 systemd[1]: Stopped verity-setup.service. Jul 14 21:19:52.903624 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 21:19:52.903634 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 21:19:52.903645 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 21:19:52.903655 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 21:19:52.903665 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 21:19:52.903717 systemd-journald[1159]: Collecting audit messages is disabled. Jul 14 21:19:52.903746 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 14 21:19:52.903758 systemd-journald[1159]: Journal started Jul 14 21:19:52.903781 systemd-journald[1159]: Runtime Journal (/run/log/journal/b0c8073632104e599d01f693ab4485f2) is 6M, max 48.5M, 42.4M free. Jul 14 21:19:52.667679 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:19:52.688547 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 21:19:52.688925 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:19:52.907004 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 21:19:52.908997 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 21:19:52.909826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:19:52.911169 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:19:52.912748 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 21:19:52.914083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:19:52.914232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:19:52.915583 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:19:52.915771 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:19:52.917071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:19:52.917226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:19:52.918507 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:19:52.918650 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 21:19:52.919866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:19:52.920027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:19:52.921401 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 14 21:19:52.922755 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 21:19:52.924132 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 21:19:52.925585 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 14 21:19:52.937659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:19:52.941467 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 21:19:52.943886 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 21:19:52.945873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 21:19:52.946866 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:19:52.946919 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 21:19:52.948615 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 14 21:19:52.957450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 21:19:52.958582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:19:52.959885 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 21:19:52.961826 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 21:19:52.963081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:19:52.965149 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 14 21:19:52.966372 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:19:52.968704 systemd-journald[1159]: Time spent on flushing to /var/log/journal/b0c8073632104e599d01f693ab4485f2 is 20.993ms for 886 entries. Jul 14 21:19:52.968704 systemd-journald[1159]: System Journal (/var/log/journal/b0c8073632104e599d01f693ab4485f2) is 8M, max 195.6M, 187.6M free. Jul 14 21:19:53.001846 systemd-journald[1159]: Received client request to flush runtime journal. Jul 14 21:19:53.001900 kernel: loop0: detected capacity change from 0 to 134232 Jul 14 21:19:52.970333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:19:52.979908 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 21:19:52.982018 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 21:19:52.984421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 21:19:52.985952 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 21:19:52.998935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 21:19:53.000341 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 21:19:53.003874 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 14 21:19:53.005691 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 21:19:53.008717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:19:53.013861 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jul 14 21:19:53.013876 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jul 14 21:19:53.017527 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 14 21:19:53.020832 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 21:19:53.024726 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:19:53.033944 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:19:53.035954 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 14 21:19:53.044727 kernel: loop1: detected capacity change from 0 to 105936 Jul 14 21:19:53.056454 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 21:19:53.059037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 21:19:53.065774 kernel: loop2: detected capacity change from 0 to 207008 Jul 14 21:19:53.081949 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 14 21:19:53.081962 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 14 21:19:53.085373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:19:53.089720 kernel: loop3: detected capacity change from 0 to 134232 Jul 14 21:19:53.096794 kernel: loop4: detected capacity change from 0 to 105936 Jul 14 21:19:53.101728 kernel: loop5: detected capacity change from 0 to 207008 Jul 14 21:19:53.107998 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 21:19:53.108365 (sd-merge)[1226]: Merged extensions into '/usr'. Jul 14 21:19:53.111761 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 21:19:53.111776 systemd[1]: Reloading... Jul 14 21:19:53.173837 zram_generator::config[1252]: No configuration found. Jul 14 21:19:53.234078 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 14 21:19:53.249461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:19:53.311146 systemd[1]: Reloading finished in 199 ms. Jul 14 21:19:53.348730 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 21:19:53.350154 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 21:19:53.366923 systemd[1]: Starting ensure-sysext.service... Jul 14 21:19:53.368851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 21:19:53.377429 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Jul 14 21:19:53.377442 systemd[1]: Reloading... Jul 14 21:19:53.381791 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 14 21:19:53.381834 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 14 21:19:53.382102 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:19:53.382279 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 21:19:53.382883 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:19:53.383101 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 14 21:19:53.383150 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 14 21:19:53.385648 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 14 21:19:53.385665 systemd-tmpfiles[1287]: Skipping /boot Jul 14 21:19:53.391315 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:19:53.391333 systemd-tmpfiles[1287]: Skipping /boot Jul 14 21:19:53.430103 zram_generator::config[1314]: No configuration found. Jul 14 21:19:53.493996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:19:53.554990 systemd[1]: Reloading finished in 177 ms. Jul 14 21:19:53.576034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 21:19:53.577589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:19:53.591558 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 21:19:53.593812 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 21:19:53.605343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 21:19:53.610845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 21:19:53.613092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:19:53.615139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 21:19:53.623918 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 21:19:53.630199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:19:53.631621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:19:53.636378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 14 21:19:53.638894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:19:53.640029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:19:53.640135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 21:19:53.642749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 21:19:53.644689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:19:53.644882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:19:53.646493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:19:53.646618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:19:53.648327 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:19:53.648459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:19:53.655563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:19:53.656677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:19:53.662045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:19:53.665313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:19:53.666455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 14 21:19:53.666616 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 21:19:53.671655 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 21:19:53.673411 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Jul 14 21:19:53.675140 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 21:19:53.676770 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 21:19:53.680188 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:19:53.680339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:19:53.684052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 21:19:53.686117 augenrules[1393]: No rules Jul 14 21:19:53.688232 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:19:53.688421 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 21:19:53.690461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:19:53.690627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:19:53.694244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:19:53.694420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:19:53.698236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:19:53.700993 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 21:19:53.719738 systemd[1]: Finished ensure-sysext.service. Jul 14 21:19:53.725876 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 14 21:19:53.726966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:19:53.729922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:19:53.734054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:19:53.738985 systemd-resolved[1353]: Positive Trust Anchors: Jul 14 21:19:53.739314 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:19:53.739374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:19:53.739460 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 21:19:53.749237 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jul 14 21:19:53.755783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:19:53.757268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:19:53.757315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 21:19:53.760195 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 14 21:19:53.763810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 21:19:53.764890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:19:53.765280 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 21:19:53.766688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:19:53.766878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:19:53.769999 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:19:53.770138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:19:53.771392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:19:53.771531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:19:53.773094 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:19:53.773252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:19:53.774889 augenrules[1431]: /sbin/augenrules: No change Jul 14 21:19:53.778186 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:19:53.779462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:19:53.779520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:19:53.785456 augenrules[1460]: No rules Jul 14 21:19:53.786763 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:19:53.786948 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 21:19:53.801501 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 14 21:19:53.843156 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 21:19:53.844710 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 21:19:53.845846 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 21:19:53.847128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 21:19:53.848368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 21:19:53.849621 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:19:53.849652 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:19:53.850509 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 21:19:53.851532 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 21:19:53.852624 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 21:19:53.853756 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:19:53.855300 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 21:19:53.857187 systemd-networkd[1437]: lo: Link UP Jul 14 21:19:53.857417 systemd-networkd[1437]: lo: Gained carrier Jul 14 21:19:53.857624 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 21:19:53.858574 systemd-networkd[1437]: Enumeration completed Jul 14 21:19:53.860272 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 14 21:19:53.861633 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 14 21:19:53.862772 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jul 14 21:19:53.864170 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:19:53.864415 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:19:53.865371 systemd-networkd[1437]: eth0: Link UP Jul 14 21:19:53.865522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 21:19:53.865671 systemd-networkd[1437]: eth0: Gained carrier Jul 14 21:19:53.866053 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:19:53.866821 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 14 21:19:53.868735 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 21:19:53.869996 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 21:19:53.874258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 21:19:53.875588 systemd[1]: Reached target network.target - Network. Jul 14 21:19:53.876504 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 21:19:53.877457 systemd[1]: Reached target basic.target - Basic System. Jul 14 21:19:53.878421 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:19:53.878445 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:19:53.878763 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:19:53.879217 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Jul 14 21:19:53.881528 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 14 21:19:53.881858 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:19:53.881910 systemd-timesyncd[1439]: Initial clock synchronization to Mon 2025-07-14 21:19:53.994430 UTC. Jul 14 21:19:53.883781 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 21:19:53.886876 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 21:19:53.892909 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 21:19:53.894644 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 21:19:53.895794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 21:19:53.897934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 21:19:53.900321 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 21:19:53.909915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 21:19:53.911863 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 21:19:53.914910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 21:19:53.921292 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 21:19:53.924772 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 14 21:19:53.926828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 21:19:53.928674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:19:53.929103 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 14 21:19:53.930820 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 21:19:53.936941 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 21:19:53.939525 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 21:19:53.943125 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:19:53.943299 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 21:19:53.946735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 21:19:53.951717 jq[1489]: false Jul 14 21:19:53.951708 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:19:53.952783 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 21:19:53.965704 jq[1515]: true Jul 14 21:19:53.965538 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:19:53.965750 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 21:19:53.967127 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 14 21:19:53.975381 extend-filesystems[1490]: Found /dev/vda6 Jul 14 21:19:53.978010 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 21:19:53.980290 extend-filesystems[1490]: Found /dev/vda9 Jul 14 21:19:53.996814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 14 21:19:53.999244 extend-filesystems[1490]: Checking size of /dev/vda9 Jul 14 21:19:54.014760 tar[1518]: linux-arm64/LICENSE Jul 14 21:19:54.015108 tar[1518]: linux-arm64/helm Jul 14 21:19:54.022865 jq[1532]: true Jul 14 21:19:54.025603 update_engine[1511]: I20250714 21:19:54.025383 1511 main.cc:92] Flatcar Update Engine starting Jul 14 21:19:54.027737 extend-filesystems[1490]: Resized partition /dev/vda9 Jul 14 21:19:54.033034 dbus-daemon[1483]: [system] SELinux support is enabled Jul 14 21:19:54.035350 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 21:19:54.037168 update_engine[1511]: I20250714 21:19:54.037117 1511 update_check_scheduler.cc:74] Next update check in 6m17s Jul 14 21:19:54.038795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:19:54.038826 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 21:19:54.040799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:19:54.040816 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 21:19:54.041954 extend-filesystems[1542]: resize2fs 1.47.2 (1-Jan-2025) Jul 14 21:19:54.046805 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:19:54.043068 systemd[1]: Started update-engine.service - Update Engine. Jul 14 21:19:54.046870 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 14 21:19:54.070731 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:19:54.085494 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:19:54.085494 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:19:54.085494 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:19:54.105213 extend-filesystems[1490]: Resized filesystem in /dev/vda9 Jul 14 21:19:54.086435 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:19:54.087977 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 21:19:54.163011 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:19:54.169793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:19:54.172826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 21:19:54.179178 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 21:19:54.194388 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:19:54.195081 systemd-logind[1503]: New seat seat0. Jul 14 21:19:54.196308 systemd[1]: Started systemd-logind.service - User Login Management. 
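The resize messages above count 4 KiB ext4 blocks. As a quick sanity check (an annotation, not part of the log), converting the before/after block counts to bytes shows the root filesystem growing from roughly 2.1 GiB to 7.1 GiB:

```python
# Block counts taken from the EXT4-fs / resize2fs messages above;
# ext4 here uses 4 KiB (4096-byte) blocks.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699
old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK
print(f"{old_bytes} -> {new_bytes} bytes "
      f"({old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB)")
```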
Jul 14 21:19:54.203668 locksmithd[1544]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:19:54.257743 containerd[1527]: time="2025-07-14T21:19:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 14 21:19:54.260720 containerd[1527]: time="2025-07-14T21:19:54.260660747Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 14 21:19:54.275692 containerd[1527]: time="2025-07-14T21:19:54.275482364Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.763µs" Jul 14 21:19:54.275692 containerd[1527]: time="2025-07-14T21:19:54.275528648Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 14 21:19:54.275692 containerd[1527]: time="2025-07-14T21:19:54.275548345Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 14 21:19:54.275888 containerd[1527]: time="2025-07-14T21:19:54.275768162Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 14 21:19:54.275888 containerd[1527]: time="2025-07-14T21:19:54.275788222Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 14 21:19:54.275888 containerd[1527]: time="2025-07-14T21:19:54.275813721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 14 21:19:54.275888 containerd[1527]: time="2025-07-14T21:19:54.275875029Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 14 21:19:54.275888 containerd[1527]: time="2025-07-14T21:19:54.275887799Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276196 containerd[1527]: time="2025-07-14T21:19:54.276133879Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276196 containerd[1527]: time="2025-07-14T21:19:54.276157081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276196 containerd[1527]: time="2025-07-14T21:19:54.276169568Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276196 containerd[1527]: time="2025-07-14T21:19:54.276178350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276288 containerd[1527]: time="2025-07-14T21:19:54.276246547Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276457 containerd[1527]: time="2025-07-14T21:19:54.276436273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276486 containerd[1527]: time="2025-07-14T21:19:54.276471398Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 14 21:19:54.276486 containerd[1527]: time="2025-07-14T21:19:54.276483322Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 14 21:19:54.276533 containerd[1527]: time="2025-07-14T21:19:54.276523684Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 14 21:19:54.276890 containerd[1527]: time="2025-07-14T21:19:54.276862654Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 14 21:19:54.276948 containerd[1527]: time="2025-07-14T21:19:54.276939068Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:19:54.280899 containerd[1527]: time="2025-07-14T21:19:54.280825075Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 14 21:19:54.280899 containerd[1527]: time="2025-07-14T21:19:54.280896091Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280912607Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280925013Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280938669Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280949424Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280961106Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280971982Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280984026Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.280993734Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.281002999Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 14 21:19:54.281024 containerd[1527]: time="2025-07-14T21:19:54.281015768Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 14 21:19:54.281174 containerd[1527]: time="2025-07-14T21:19:54.281147207Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 14 21:19:54.281174 containerd[1527]: time="2025-07-14T21:19:54.281167348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 14 21:19:54.281209 containerd[1527]: time="2025-07-14T21:19:54.281180842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 14 21:19:54.281209 containerd[1527]: time="2025-07-14T21:19:54.281191073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 14 21:19:54.281240 containerd[1527]: time="2025-07-14T21:19:54.281209482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 14 21:19:54.281240 containerd[1527]: time="2025-07-14T21:19:54.281223742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 14 21:19:54.281240 containerd[1527]: time="2025-07-14T21:19:54.281234980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 14 21:19:54.281291 containerd[1527]: time="2025-07-14T21:19:54.281245575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 14 
21:19:54.281291 containerd[1527]: time="2025-07-14T21:19:54.281256410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 14 21:19:54.281291 containerd[1527]: time="2025-07-14T21:19:54.281267367Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 14 21:19:54.281291 containerd[1527]: time="2025-07-14T21:19:54.281278807Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 14 21:19:54.281640 containerd[1527]: time="2025-07-14T21:19:54.281463256Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 14 21:19:54.281640 containerd[1527]: time="2025-07-14T21:19:54.281487828Z" level=info msg="Start snapshots syncer" Jul 14 21:19:54.281640 containerd[1527]: time="2025-07-14T21:19:54.281509701Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 14 21:19:54.281850 containerd[1527]: time="2025-07-14T21:19:54.281751391Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 14 21:19:54.281850 containerd[1527]: time="2025-07-14T21:19:54.281796184Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 14 21:19:54.282735 containerd[1527]: time="2025-07-14T21:19:54.282676216Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282888863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282923384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282951783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282965599Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282979134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.282990292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 14 21:19:54.283014 containerd[1527]: time="2025-07-14T21:19:54.283002094Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 14 21:19:54.283154 containerd[1527]: time="2025-07-14T21:19:54.283055306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 14 21:19:54.283154 containerd[1527]: time="2025-07-14T21:19:54.283069647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 14 21:19:54.283154 containerd[1527]: time="2025-07-14T21:19:54.283103201Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 14 21:19:54.283154 containerd[1527]: time="2025-07-14T21:19:54.283150290Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283239877Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283268718Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283282656Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283291880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283310410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 14 21:19:54.283386 containerd[1527]: time="2025-07-14T21:19:54.283331558Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 14 21:19:54.283835 containerd[1527]: time="2025-07-14T21:19:54.283505413Z" level=info msg="runtime interface created" Jul 14 21:19:54.283835 containerd[1527]: time="2025-07-14T21:19:54.283513912Z" level=info msg="created NRI interface" Jul 14 21:19:54.283835 containerd[1527]: time="2025-07-14T21:19:54.283523379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 14 21:19:54.283835 containerd[1527]: time="2025-07-14T21:19:54.283537235Z" level=info msg="Connect containerd service" Jul 14 21:19:54.283835 containerd[1527]: time="2025-07-14T21:19:54.283566560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:19:54.284489 
containerd[1527]: time="2025-07-14T21:19:54.284458758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:19:54.386252 tar[1518]: linux-arm64/README.md Jul 14 21:19:54.390690 containerd[1527]: time="2025-07-14T21:19:54.390566553Z" level=info msg="Start subscribing containerd event" Jul 14 21:19:54.390690 containerd[1527]: time="2025-07-14T21:19:54.390643088Z" level=info msg="Start recovering state" Jul 14 21:19:54.390822 containerd[1527]: time="2025-07-14T21:19:54.390785241Z" level=info msg="Start event monitor" Jul 14 21:19:54.390822 containerd[1527]: time="2025-07-14T21:19:54.390809330Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:19:54.390822 containerd[1527]: time="2025-07-14T21:19:54.390816983Z" level=info msg="Start streaming server" Jul 14 21:19:54.390892 containerd[1527]: time="2025-07-14T21:19:54.390828705Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 14 21:19:54.390892 containerd[1527]: time="2025-07-14T21:19:54.390836319Z" level=info msg="runtime interface starting up..." Jul 14 21:19:54.390892 containerd[1527]: time="2025-07-14T21:19:54.390841716Z" level=info msg="starting plugins..." Jul 14 21:19:54.391141 containerd[1527]: time="2025-07-14T21:19:54.390855331Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 14 21:19:54.391260 containerd[1527]: time="2025-07-14T21:19:54.391119780Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:19:54.391494 containerd[1527]: time="2025-07-14T21:19:54.391415407Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 14 21:19:54.391661 containerd[1527]: time="2025-07-14T21:19:54.391630390Z" level=info msg="containerd successfully booted in 0.134341s" Jul 14 21:19:54.391729 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:19:54.403766 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:19:54.834585 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:19:54.853429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 21:19:54.856449 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 21:19:54.876235 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:19:54.876486 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:19:54.879611 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:19:54.909731 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:19:54.912489 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 21:19:54.914512 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 21:19:54.915941 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:19:55.149987 systemd-networkd[1437]: eth0: Gained IPv6LL Jul 14 21:19:55.152403 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:19:55.154267 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:19:55.156862 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:19:55.159157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:19:55.179124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:19:55.195078 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:19:55.195318 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
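The escaped `config="{...}"` string that the cri plugin logged during startup above is ordinary JSON once unescaped. A minimal sketch of reading one of its values — here the `SystemdCgroup` runc option, using an abbreviated fragment with only the relevant keys from the logged config (the full string is elided):

```python
import json

# Abbreviated fragment of the cri-plugin config logged above; only the keys
# on the path to SystemdCgroup are kept, the rest of the config is elided.
cfg = json.loads(
    '{"containerd":{"defaultRuntimeName":"runc","runtimes":{"runc":'
    '{"runtimeType":"io.containerd.runc.v2","options":{"SystemdCgroup":true}}}}}'
)
opts = cfg["containerd"]["runtimes"]["runc"]["options"]
print(opts["SystemdCgroup"])
```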
Jul 14 21:19:55.197421 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:19:55.201914 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 21:19:55.732237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:19:55.733835 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 21:19:55.737028 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:19:55.738891 systemd[1]: Startup finished in 2.131s (kernel) + 9.649s (initrd) + 3.495s (userspace) = 15.276s. Jul 14 21:19:55.964261 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 21:19:55.965343 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:54238.service - OpenSSH per-connection server daemon (10.0.0.1:54238). Jul 14 21:19:56.051878 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 54238 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.053745 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.059604 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:19:56.060520 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:19:56.066741 systemd-logind[1503]: New session 1 of user core. Jul 14 21:19:56.082047 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:19:56.085827 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 21:19:56.105664 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:19:56.108931 systemd-logind[1503]: New session c1 of user core. 
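The "Startup finished" entry above breaks the boot into kernel, initrd, and userspace phases. Summing the three durations (in milliseconds) reproduces the logged 15.276 s total up to sub-millisecond rounding:

```python
# Phase durations in ms, taken from the "Startup finished" line above.
phases = {"kernel": 2131, "initrd": 9649, "userspace": 3495}
total_ms = sum(phases.values())
# 15275 ms; the log prints 15.276 s because it sums the unrounded values.
print(f"total = {total_ms} ms")
```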
Jul 14 21:19:56.185109 kubelet[1637]: E0714 21:19:56.185057 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:19:56.187439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:19:56.187570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:19:56.187927 systemd[1]: kubelet.service: Consumed 863ms CPU time, 257.3M memory peak. Jul 14 21:19:56.218338 systemd[1654]: Queued start job for default target default.target. Jul 14 21:19:56.234552 systemd[1654]: Created slice app.slice - User Application Slice. Jul 14 21:19:56.234580 systemd[1654]: Reached target paths.target - Paths. Jul 14 21:19:56.234614 systemd[1654]: Reached target timers.target - Timers. Jul 14 21:19:56.235744 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:19:56.244733 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:19:56.244792 systemd[1654]: Reached target sockets.target - Sockets. Jul 14 21:19:56.244831 systemd[1654]: Reached target basic.target - Basic System. Jul 14 21:19:56.244861 systemd[1654]: Reached target default.target - Main User Target. Jul 14 21:19:56.244893 systemd[1654]: Startup finished in 130ms. Jul 14 21:19:56.245013 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:19:56.246447 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:19:56.310982 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:54248.service - OpenSSH per-connection server daemon (10.0.0.1:54248). 
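The kubelet exit above (status=1) is the expected first-boot behavior on a node where `kubeadm init`/`kubeadm join` has not yet written `/var/lib/kubelet/config.yaml`. The failing precondition is just a missing-file check; a sketch of the same check against a scratch root (the scratch directory is for illustration only, the real path is the one named in the error):

```python
import tempfile
from pathlib import Path

# Scratch root stands in for / so the check is reproducible anywhere;
# kubelet itself looks at /var/lib/kubelet/config.yaml, per the error above.
root = Path(tempfile.mkdtemp())
config = root / "var" / "lib" / "kubelet" / "config.yaml"
print("missing" if not config.is_file() else "present")
```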
Jul 14 21:19:56.353822 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 54248 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.354987 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.358535 systemd-logind[1503]: New session 2 of user core. Jul 14 21:19:56.376870 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 21:19:56.427906 sshd[1671]: Connection closed by 10.0.0.1 port 54248 Jul 14 21:19:56.428328 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Jul 14 21:19:56.442689 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:54248.service: Deactivated successfully. Jul 14 21:19:56.444368 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:19:56.446186 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:19:56.448430 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:54254.service - OpenSSH per-connection server daemon (10.0.0.1:54254). Jul 14 21:19:56.449074 systemd-logind[1503]: Removed session 2. Jul 14 21:19:56.508940 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 54254 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.510005 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.513774 systemd-logind[1503]: New session 3 of user core. Jul 14 21:19:56.519902 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:19:56.565999 sshd[1681]: Connection closed by 10.0.0.1 port 54254 Jul 14 21:19:56.566434 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Jul 14 21:19:56.581578 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:54254.service: Deactivated successfully. Jul 14 21:19:56.583968 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:19:56.585205 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. 
Jul 14 21:19:56.587783 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:54264.service - OpenSSH per-connection server daemon (10.0.0.1:54264). Jul 14 21:19:56.589400 systemd-logind[1503]: Removed session 3. Jul 14 21:19:56.638553 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 54264 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.639627 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.643216 systemd-logind[1503]: New session 4 of user core. Jul 14 21:19:56.652845 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:19:56.703412 sshd[1690]: Connection closed by 10.0.0.1 port 54264 Jul 14 21:19:56.703691 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jul 14 21:19:56.714558 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:54264.service: Deactivated successfully. Jul 14 21:19:56.716978 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:19:56.717600 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:19:56.719611 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:54278.service - OpenSSH per-connection server daemon (10.0.0.1:54278). Jul 14 21:19:56.720200 systemd-logind[1503]: Removed session 4. Jul 14 21:19:56.760914 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 54278 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.761947 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.765581 systemd-logind[1503]: New session 5 of user core. Jul 14 21:19:56.772838 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 14 21:19:56.831453 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:19:56.831756 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:19:56.846477 sudo[1700]: pam_unix(sudo:session): session closed for user root Jul 14 21:19:56.847760 sshd[1699]: Connection closed by 10.0.0.1 port 54278 Jul 14 21:19:56.848245 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 14 21:19:56.858672 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:54278.service: Deactivated successfully. Jul 14 21:19:56.860938 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:19:56.862199 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:19:56.863963 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:54284.service - OpenSSH per-connection server daemon (10.0.0.1:54284). Jul 14 21:19:56.864905 systemd-logind[1503]: Removed session 5. Jul 14 21:19:56.915010 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 54284 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:56.916205 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:56.920781 systemd-logind[1503]: New session 6 of user core. Jul 14 21:19:56.934878 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 14 21:19:56.985781 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:19:56.986299 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:19:56.991052 sudo[1711]: pam_unix(sudo:session): session closed for user root Jul 14 21:19:56.995373 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 14 21:19:56.995625 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:19:57.003237 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 21:19:57.036301 augenrules[1733]: No rules Jul 14 21:19:57.037574 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:19:57.039749 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 21:19:57.040628 sudo[1710]: pam_unix(sudo:session): session closed for user root Jul 14 21:19:57.041811 sshd[1709]: Connection closed by 10.0.0.1 port 54284 Jul 14 21:19:57.042282 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 14 21:19:57.052667 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:54284.service: Deactivated successfully. Jul 14 21:19:57.054868 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:19:57.057243 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:19:57.060178 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:54298.service - OpenSSH per-connection server daemon (10.0.0.1:54298). Jul 14 21:19:57.060855 systemd-logind[1503]: Removed session 6. Jul 14 21:19:57.112844 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 54298 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:19:57.113270 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:19:57.117092 systemd-logind[1503]: New session 7 of user core. 
Jul 14 21:19:57.128835 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:19:57.178246 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:19:57.178771 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:19:57.514105 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 21:19:57.526081 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:19:57.775396 dockerd[1767]: time="2025-07-14T21:19:57.775260673Z" level=info msg="Starting up" Jul 14 21:19:57.776146 dockerd[1767]: time="2025-07-14T21:19:57.776063825Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 14 21:19:57.787328 dockerd[1767]: time="2025-07-14T21:19:57.787265148Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 14 21:19:57.999128 dockerd[1767]: time="2025-07-14T21:19:57.998961339Z" level=info msg="Loading containers: start." Jul 14 21:19:58.015744 kernel: Initializing XFRM netlink socket Jul 14 21:19:58.227597 systemd-networkd[1437]: docker0: Link UP Jul 14 21:19:58.231904 dockerd[1767]: time="2025-07-14T21:19:58.231855315Z" level=info msg="Loading containers: done." Jul 14 21:19:58.249043 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3803036005-merged.mount: Deactivated successfully. 
Jul 14 21:19:58.251436 dockerd[1767]: time="2025-07-14T21:19:58.251390058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:19:58.251508 dockerd[1767]: time="2025-07-14T21:19:58.251473670Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 14 21:19:58.251581 dockerd[1767]: time="2025-07-14T21:19:58.251556920Z" level=info msg="Initializing buildkit" Jul 14 21:19:58.279981 dockerd[1767]: time="2025-07-14T21:19:58.279939561Z" level=info msg="Completed buildkit initialization" Jul 14 21:19:58.286652 dockerd[1767]: time="2025-07-14T21:19:58.286609719Z" level=info msg="Daemon has completed initialization" Jul 14 21:19:58.287187 dockerd[1767]: time="2025-07-14T21:19:58.286670568Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:19:58.286871 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 21:19:58.918834 containerd[1527]: time="2025-07-14T21:19:58.918790473Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 14 21:19:59.633273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020329967.mount: Deactivated successfully. 
Jul 14 21:20:01.038610 containerd[1527]: time="2025-07-14T21:20:01.038452057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:01.039196 containerd[1527]: time="2025-07-14T21:20:01.039153948Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 14 21:20:01.040152 containerd[1527]: time="2025-07-14T21:20:01.040114594Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:01.042601 containerd[1527]: time="2025-07-14T21:20:01.042564724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:01.043861 containerd[1527]: time="2025-07-14T21:20:01.043807147Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.124968466s" Jul 14 21:20:01.043938 containerd[1527]: time="2025-07-14T21:20:01.043874728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 14 21:20:01.044538 containerd[1527]: time="2025-07-14T21:20:01.044502247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 14 21:20:02.463354 containerd[1527]: time="2025-07-14T21:20:02.463300185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:02.463877 containerd[1527]: time="2025-07-14T21:20:02.463832974Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 14 21:20:02.464742 containerd[1527]: time="2025-07-14T21:20:02.464680800Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:02.467061 containerd[1527]: time="2025-07-14T21:20:02.467019866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:02.467937 containerd[1527]: time="2025-07-14T21:20:02.467907780Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.423371826s" Jul 14 21:20:02.467970 containerd[1527]: time="2025-07-14T21:20:02.467938388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 14 21:20:02.468429 containerd[1527]: time="2025-07-14T21:20:02.468398112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 14 21:20:03.974692 containerd[1527]: time="2025-07-14T21:20:03.974626249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:03.976034 containerd[1527]: time="2025-07-14T21:20:03.975959961Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 14 21:20:03.977722 containerd[1527]: time="2025-07-14T21:20:03.977198018Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:03.980765 containerd[1527]: time="2025-07-14T21:20:03.980724131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:03.982296 containerd[1527]: time="2025-07-14T21:20:03.982257145Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.513823889s" Jul 14 21:20:03.982444 containerd[1527]: time="2025-07-14T21:20:03.982300073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 14 21:20:03.982876 containerd[1527]: time="2025-07-14T21:20:03.982851758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 14 21:20:05.038537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215100628.mount: Deactivated successfully. 
Jul 14 21:20:05.406735 containerd[1527]: time="2025-07-14T21:20:05.406580347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:05.407271 containerd[1527]: time="2025-07-14T21:20:05.407213251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 14 21:20:05.408046 containerd[1527]: time="2025-07-14T21:20:05.408011524Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:05.410411 containerd[1527]: time="2025-07-14T21:20:05.410377245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:05.411374 containerd[1527]: time="2025-07-14T21:20:05.411336193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.428451751s" Jul 14 21:20:05.411405 containerd[1527]: time="2025-07-14T21:20:05.411374646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 14 21:20:05.411829 containerd[1527]: time="2025-07-14T21:20:05.411809225Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:20:05.966602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856980778.mount: Deactivated successfully. 
Jul 14 21:20:06.438079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:20:06.439657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:20:06.591726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:20:06.595893 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:20:06.642967 kubelet[2114]: E0714 21:20:06.642905 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:20:06.646287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:20:06.646435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:20:06.646771 systemd[1]: kubelet.service: Consumed 155ms CPU time, 107.2M memory peak. 
Jul 14 21:20:06.857756 containerd[1527]: time="2025-07-14T21:20:06.857631272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:06.858643 containerd[1527]: time="2025-07-14T21:20:06.858603176Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 21:20:06.859568 containerd[1527]: time="2025-07-14T21:20:06.859504491Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:06.863339 containerd[1527]: time="2025-07-14T21:20:06.863240135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:06.863819 containerd[1527]: time="2025-07-14T21:20:06.863778797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.451940353s" Jul 14 21:20:06.863886 containerd[1527]: time="2025-07-14T21:20:06.863819689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:20:06.864531 containerd[1527]: time="2025-07-14T21:20:06.864247955Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:20:07.334273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560255209.mount: Deactivated successfully. 
Jul 14 21:20:07.338138 containerd[1527]: time="2025-07-14T21:20:07.338096746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:20:07.338831 containerd[1527]: time="2025-07-14T21:20:07.338798073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 21:20:07.339447 containerd[1527]: time="2025-07-14T21:20:07.339408445Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:20:07.341566 containerd[1527]: time="2025-07-14T21:20:07.341526308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:20:07.342329 containerd[1527]: time="2025-07-14T21:20:07.342275461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 477.996287ms" Jul 14 21:20:07.342329 containerd[1527]: time="2025-07-14T21:20:07.342302583Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:20:07.342993 containerd[1527]: time="2025-07-14T21:20:07.342823082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 14 21:20:07.899208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494523890.mount: 
Deactivated successfully. Jul 14 21:20:09.611390 containerd[1527]: time="2025-07-14T21:20:09.611339719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:09.612187 containerd[1527]: time="2025-07-14T21:20:09.612152767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 14 21:20:09.614605 containerd[1527]: time="2025-07-14T21:20:09.614551885Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:09.754452 containerd[1527]: time="2025-07-14T21:20:09.754148283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:09.755197 containerd[1527]: time="2025-07-14T21:20:09.754820114Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.411966824s" Jul 14 21:20:09.755197 containerd[1527]: time="2025-07-14T21:20:09.754856050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 14 21:20:13.908332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:20:13.908483 systemd[1]: kubelet.service: Consumed 155ms CPU time, 107.2M memory peak. Jul 14 21:20:13.910489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 14 21:20:13.931766 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... Jul 14 21:20:13.931783 systemd[1]: Reloading... Jul 14 21:20:14.004774 zram_generator::config[2254]: No configuration found. Jul 14 21:20:14.080888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:20:14.167777 systemd[1]: Reloading finished in 235 ms. Jul 14 21:20:14.213526 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 21:20:14.213629 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 21:20:14.214012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:20:14.217968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:20:14.364547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:20:14.389109 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:20:14.428817 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:20:14.428817 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:20:14.428817 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:20:14.429169 kubelet[2298]: I0714 21:20:14.428807 2298 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:20:14.921974 kubelet[2298]: I0714 21:20:14.921934 2298 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:20:14.921974 kubelet[2298]: I0714 21:20:14.921962 2298 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:20:14.922248 kubelet[2298]: I0714 21:20:14.922225 2298 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:20:14.956096 kubelet[2298]: E0714 21:20:14.956060 2298 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:20:14.959719 kubelet[2298]: I0714 21:20:14.959517 2298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:20:14.965272 kubelet[2298]: I0714 21:20:14.965246 2298 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 14 21:20:14.968386 kubelet[2298]: I0714 21:20:14.968236 2298 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:20:14.969718 kubelet[2298]: I0714 21:20:14.968859 2298 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:20:14.969718 kubelet[2298]: I0714 21:20:14.968896 2298 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:20:14.969718 kubelet[2298]: I0714 21:20:14.969279 2298 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 21:20:14.969718 kubelet[2298]: I0714 21:20:14.969288 2298 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:20:14.969884 kubelet[2298]: I0714 21:20:14.969479 2298 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:20:14.974082 kubelet[2298]: I0714 21:20:14.973899 2298 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:20:14.974082 kubelet[2298]: I0714 21:20:14.973926 2298 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:20:14.974082 kubelet[2298]: I0714 21:20:14.973948 2298 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:20:14.974082 kubelet[2298]: I0714 21:20:14.973963 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:20:14.976528 kubelet[2298]: I0714 21:20:14.976507 2298 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 14 21:20:14.977396 kubelet[2298]: I0714 21:20:14.977182 2298 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:20:14.977396 kubelet[2298]: W0714 21:20:14.977298 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 14 21:20:14.977770 kubelet[2298]: W0714 21:20:14.977682 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 14 21:20:14.977835 kubelet[2298]: E0714 21:20:14.977778 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:20:14.977835 kubelet[2298]: W0714 21:20:14.977725 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 14 21:20:14.977835 kubelet[2298]: E0714 21:20:14.977806 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:20:14.978089 kubelet[2298]: I0714 21:20:14.978070 2298 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 21:20:14.978134 kubelet[2298]: I0714 21:20:14.978107 2298 server.go:1287] "Started kubelet"
Jul 14 21:20:14.978680 kubelet[2298]: I0714 21:20:14.978595 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:20:14.979048 kubelet[2298]: I0714 21:20:14.978949 2298 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:20:14.979048 kubelet[2298]: I0714 21:20:14.979037 2298 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:20:14.979574 kubelet[2298]: I0714 21:20:14.979328 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:20:14.979944 kubelet[2298]: I0714 21:20:14.979923 2298 server.go:479] "Adding debug handlers to kubelet server"
Jul 14 21:20:14.981594 kubelet[2298]: E0714 21:20:14.981343 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523af1bfb076a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:20:14.978086567 +0000 UTC m=+0.585413881,LastTimestamp:2025-07-14 21:20:14.978086567 +0000 UTC m=+0.585413881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 21:20:14.982095 kubelet[2298]: E0714 21:20:14.982070 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:20:14.982250 kubelet[2298]: I0714 21:20:14.982237 2298 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 21:20:14.982844 kubelet[2298]: I0714 21:20:14.982828 2298 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 21:20:14.982948 kubelet[2298]: I0714 21:20:14.982401 2298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:20:14.983125 kubelet[2298]: I0714 21:20:14.983110 2298 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:20:14.983464 kubelet[2298]: I0714 21:20:14.983434 2298 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:20:14.983579 kubelet[2298]: I0714 21:20:14.983530 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:20:14.984191 kubelet[2298]: W0714 21:20:14.984148 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 14 21:20:14.984314 kubelet[2298]: E0714 21:20:14.984296 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:20:14.987768 kubelet[2298]: E0714 21:20:14.985433 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms"
Jul 14 21:20:14.988839 kubelet[2298]: I0714 21:20:14.988819 2298 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:20:14.989514 kubelet[2298]: E0714 21:20:14.989010 2298 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:20:14.997715 kubelet[2298]: I0714 21:20:14.997592 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:20:14.998716 kubelet[2298]: I0714 21:20:14.998682 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:20:14.998816 kubelet[2298]: I0714 21:20:14.998805 2298 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 14 21:20:14.998878 kubelet[2298]: I0714 21:20:14.998869 2298 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 14 21:20:14.998926 kubelet[2298]: I0714 21:20:14.998918 2298 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 14 21:20:14.999032 kubelet[2298]: E0714 21:20:14.999009 2298 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:20:15.002479 kubelet[2298]: W0714 21:20:15.002441 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 14 21:20:15.002596 kubelet[2298]: E0714 21:20:15.002576 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 14 21:20:15.003848 kubelet[2298]: I0714 21:20:15.003821 2298 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 14 21:20:15.003848 kubelet[2298]: I0714 21:20:15.003841 2298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 14 21:20:15.003936 kubelet[2298]: I0714 21:20:15.003856 2298 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:20:15.024556 kubelet[2298]: I0714 21:20:15.024529 2298 policy_none.go:49] "None policy: Start"
Jul 14 21:20:15.024556 kubelet[2298]: I0714 21:20:15.024560 2298 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 14 21:20:15.024658 kubelet[2298]: I0714 21:20:15.024572 2298 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 21:20:15.031723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 14 21:20:15.046580 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 14 21:20:15.049246 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 14 21:20:15.064509 kubelet[2298]: I0714 21:20:15.064482 2298 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 14 21:20:15.064991 kubelet[2298]: I0714 21:20:15.064819 2298 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 21:20:15.065039 kubelet[2298]: I0714 21:20:15.064989 2298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 21:20:15.065258 kubelet[2298]: I0714 21:20:15.065223 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 21:20:15.066046 kubelet[2298]: E0714 21:20:15.066026 2298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 14 21:20:15.066523 kubelet[2298]: E0714 21:20:15.066507 2298 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 14 21:20:15.107349 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 14 21:20:15.132190 kubelet[2298]: E0714 21:20:15.132153 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:15.134923 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 14 21:20:15.136367 kubelet[2298]: E0714 21:20:15.136343 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:15.138537 systemd[1]: Created slice kubepods-burstable-pod1a453709d19b23c6517be52587651de1.slice - libcontainer container kubepods-burstable-pod1a453709d19b23c6517be52587651de1.slice.
Jul 14 21:20:15.139822 kubelet[2298]: E0714 21:20:15.139793 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:15.166933 kubelet[2298]: I0714 21:20:15.166896 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:20:15.167320 kubelet[2298]: E0714 21:20:15.167297 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 14 21:20:15.184805 kubelet[2298]: I0714 21:20:15.184691 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:15.184805 kubelet[2298]: I0714 21:20:15.184750 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:15.184805 kubelet[2298]: I0714 21:20:15.184788 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:15.184805 kubelet[2298]: I0714 21:20:15.184806 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:15.184930 kubelet[2298]: I0714 21:20:15.184824 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:15.184930 kubelet[2298]: I0714 21:20:15.184838 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:15.184930 kubelet[2298]: I0714 21:20:15.184852 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:15.184930 kubelet[2298]: I0714 21:20:15.184866 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:15.184930 kubelet[2298]: I0714 21:20:15.184880 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:15.186305 kubelet[2298]: E0714 21:20:15.186105 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms"
Jul 14 21:20:15.368981 kubelet[2298]: I0714 21:20:15.368936 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:20:15.369291 kubelet[2298]: E0714 21:20:15.369259 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 14 21:20:15.433260 kubelet[2298]: E0714 21:20:15.433230 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.433847 containerd[1527]: time="2025-07-14T21:20:15.433807886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:15.437055 kubelet[2298]: E0714 21:20:15.436994 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.437538 containerd[1527]: time="2025-07-14T21:20:15.437410334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:15.440905 kubelet[2298]: E0714 21:20:15.440864 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.441471 containerd[1527]: time="2025-07-14T21:20:15.441302468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a453709d19b23c6517be52587651de1,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:15.455597 containerd[1527]: time="2025-07-14T21:20:15.455557179Z" level=info msg="connecting to shim c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342" address="unix:///run/containerd/s/05074c521e4fee06f91843f15d6fccf86c52f2dfed645c3848a627f4e4ffd5f1" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:15.465377 containerd[1527]: time="2025-07-14T21:20:15.465183744Z" level=info msg="connecting to shim 503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b" address="unix:///run/containerd/s/4cb91bf356fe692944c1ae99fec4a259c4f603302804ed8d3c595032e3af7106" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:15.476212 containerd[1527]: time="2025-07-14T21:20:15.476149215Z" level=info msg="connecting to shim 852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e" address="unix:///run/containerd/s/26815ba899e65953f050fc273d255eb3eab70d4cc4492c315ba13d65223cd3ba" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:15.485884 systemd[1]: Started cri-containerd-c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342.scope - libcontainer container c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342.
Jul 14 21:20:15.491506 systemd[1]: Started cri-containerd-503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b.scope - libcontainer container 503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b.
Jul 14 21:20:15.498366 systemd[1]: Started cri-containerd-852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e.scope - libcontainer container 852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e.
Jul 14 21:20:15.529995 containerd[1527]: time="2025-07-14T21:20:15.529948547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342\""
Jul 14 21:20:15.532357 kubelet[2298]: E0714 21:20:15.532289 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.535872 containerd[1527]: time="2025-07-14T21:20:15.535745251Z" level=info msg="CreateContainer within sandbox \"c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 14 21:20:15.536189 containerd[1527]: time="2025-07-14T21:20:15.536143693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b\""
Jul 14 21:20:15.536803 kubelet[2298]: E0714 21:20:15.536777 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.538682 containerd[1527]: time="2025-07-14T21:20:15.538648352Z" level=info msg="CreateContainer within sandbox \"503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 14 21:20:15.544715 containerd[1527]: time="2025-07-14T21:20:15.544663773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a453709d19b23c6517be52587651de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e\""
Jul 14 21:20:15.545413 kubelet[2298]: E0714 21:20:15.545385 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:15.546911 containerd[1527]: time="2025-07-14T21:20:15.546770751Z" level=info msg="CreateContainer within sandbox \"852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 14 21:20:15.549035 containerd[1527]: time="2025-07-14T21:20:15.549008486Z" level=info msg="Container 12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:20:15.549861 containerd[1527]: time="2025-07-14T21:20:15.549832339Z" level=info msg="Container 0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:20:15.556235 containerd[1527]: time="2025-07-14T21:20:15.556199758Z" level=info msg="CreateContainer within sandbox \"503693016742c7ba305cb256d26c18cf1d233cd2cf6bc7eb6b6228a9baab794b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54\""
Jul 14 21:20:15.556973 containerd[1527]: time="2025-07-14T21:20:15.556735929Z" level=info msg="StartContainer for \"0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54\""
Jul 14 21:20:15.557862 containerd[1527]: time="2025-07-14T21:20:15.557837446Z" level=info msg="connecting to shim 0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54" address="unix:///run/containerd/s/4cb91bf356fe692944c1ae99fec4a259c4f603302804ed8d3c595032e3af7106" protocol=ttrpc version=3
Jul 14 21:20:15.558265 containerd[1527]: time="2025-07-14T21:20:15.558236569Z" level=info msg="Container 8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:20:15.560835 containerd[1527]: time="2025-07-14T21:20:15.560761064Z" level=info msg="CreateContainer within sandbox \"c6c95fd373ce469bd52e13119c74f2dbf03931bc04212713510ccb5079de7342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386\""
Jul 14 21:20:15.561260 containerd[1527]: time="2025-07-14T21:20:15.561230474Z" level=info msg="StartContainer for \"12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386\""
Jul 14 21:20:15.562573 containerd[1527]: time="2025-07-14T21:20:15.562532754Z" level=info msg="connecting to shim 12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386" address="unix:///run/containerd/s/05074c521e4fee06f91843f15d6fccf86c52f2dfed645c3848a627f4e4ffd5f1" protocol=ttrpc version=3
Jul 14 21:20:15.565002 containerd[1527]: time="2025-07-14T21:20:15.564917556Z" level=info msg="CreateContainer within sandbox \"852f8779496e2a93f7a6497accdbe4d01f5af0005993c60423fb1ed005c56e0e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12\""
Jul 14 21:20:15.565487 containerd[1527]: time="2025-07-14T21:20:15.565465308Z" level=info msg="StartContainer for \"8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12\""
Jul 14 21:20:15.566828 containerd[1527]: time="2025-07-14T21:20:15.566796601Z" level=info msg="connecting to shim 8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12" address="unix:///run/containerd/s/26815ba899e65953f050fc273d255eb3eab70d4cc4492c315ba13d65223cd3ba" protocol=ttrpc version=3
Jul 14 21:20:15.576854 systemd[1]: Started cri-containerd-0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54.scope - libcontainer container 0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54.
Jul 14 21:20:15.580333 systemd[1]: Started cri-containerd-12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386.scope - libcontainer container 12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386.
Jul 14 21:20:15.586911 kubelet[2298]: E0714 21:20:15.586501 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms"
Jul 14 21:20:15.599921 systemd[1]: Started cri-containerd-8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12.scope - libcontainer container 8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12.
Jul 14 21:20:15.658398 containerd[1527]: time="2025-07-14T21:20:15.655455303Z" level=info msg="StartContainer for \"12014e589313a688193794bab3aa64f54d6f47638ffc64bac8c62d9e64f1d386\" returns successfully"
Jul 14 21:20:15.676109 containerd[1527]: time="2025-07-14T21:20:15.672739585Z" level=info msg="StartContainer for \"8eaf3a1a471dc8b6d415d362e93e911ce9ccf5cd2d90d62938203e5411fb6a12\" returns successfully"
Jul 14 21:20:15.676109 containerd[1527]: time="2025-07-14T21:20:15.674236617Z" level=info msg="StartContainer for \"0f4347734ebc9302c3476926b400e4c46db8cddb639d73680d3cdb6c2ff77e54\" returns successfully"
Jul 14 21:20:15.773840 kubelet[2298]: I0714 21:20:15.773474 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:20:15.773942 kubelet[2298]: E0714 21:20:15.773916 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 14 21:20:16.008011 kubelet[2298]: E0714 21:20:16.007980 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:16.008139 kubelet[2298]: E0714 21:20:16.008098 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:16.009998 kubelet[2298]: E0714 21:20:16.009977 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:16.011755 kubelet[2298]: E0714 21:20:16.010371 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:16.013066 kubelet[2298]: E0714 21:20:16.013046 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:16.013251 kubelet[2298]: E0714 21:20:16.013235 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:16.575902 kubelet[2298]: I0714 21:20:16.575839 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:20:17.015140 kubelet[2298]: E0714 21:20:17.015048 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:17.015224 kubelet[2298]: E0714 21:20:17.015163 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:17.015430 kubelet[2298]: E0714 21:20:17.015409 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:20:17.015649 kubelet[2298]: E0714 21:20:17.015631 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:17.701626 kubelet[2298]: E0714 21:20:17.701588 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 14 21:20:17.776433 kubelet[2298]: I0714 21:20:17.776393 2298 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 14 21:20:17.776433 kubelet[2298]: E0714 21:20:17.776430 2298 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 14 21:20:17.786859 kubelet[2298]: I0714 21:20:17.786445 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:17.795546 kubelet[2298]: E0714 21:20:17.795519 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:17.795546 kubelet[2298]: I0714 21:20:17.795542 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:17.798263 kubelet[2298]: E0714 21:20:17.798220 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:17.798263 kubelet[2298]: I0714 21:20:17.798245 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:17.800216 kubelet[2298]: E0714 21:20:17.800183 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:17.978914 kubelet[2298]: I0714 21:20:17.978770 2298 apiserver.go:52] "Watching apiserver"
Jul 14 21:20:17.983333 kubelet[2298]: I0714 21:20:17.983290 2298 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 14 21:20:19.580419 kubelet[2298]: I0714 21:20:19.580377 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:19.586505 kubelet[2298]: E0714 21:20:19.586465 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:19.832796 systemd[1]: Reload requested from client PID 2573 ('systemctl') (unit session-7.scope)...
Jul 14 21:20:19.832825 systemd[1]: Reloading...
Jul 14 21:20:19.899744 zram_generator::config[2617]: No configuration found.
Jul 14 21:20:19.967954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:20:20.019365 kubelet[2298]: E0714 21:20:20.019288 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:20.065175 systemd[1]: Reloading finished in 231 ms.
Jul 14 21:20:20.088820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:20:20.105642 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 21:20:20.106079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:20:20.106225 systemd[1]: kubelet.service: Consumed 984ms CPU time, 128.1M memory peak.
Jul 14 21:20:20.108927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:20:20.260845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:20:20.270901 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 21:20:20.322038 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:20:20.322038 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 14 21:20:20.322038 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:20:20.322374 kubelet[2658]: I0714 21:20:20.322159 2658 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:20:20.329575 kubelet[2658]: I0714 21:20:20.329532 2658 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 14 21:20:20.329575 kubelet[2658]: I0714 21:20:20.329561 2658 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:20:20.329850 kubelet[2658]: I0714 21:20:20.329823 2658 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 14 21:20:20.331028 kubelet[2658]: I0714 21:20:20.331006 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 14 21:20:20.333414 kubelet[2658]: I0714 21:20:20.333375 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:20:20.338080 kubelet[2658]: I0714 21:20:20.338040 2658 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 14 21:20:20.341085 kubelet[2658]: I0714 21:20:20.341006 2658 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:20:20.341723 kubelet[2658]: I0714 21:20:20.341661 2658 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:20:20.341999 kubelet[2658]: I0714 21:20:20.341807 2658 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:20:20.342117 kubelet[2658]: I0714 21:20:20.342105 2658 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:20:20.342163 kubelet[2658]: I0714 21:20:20.342156 2658 container_manager_linux.go:304] "Creating device plugin manager"
Jul 14 21:20:20.342256 kubelet[2658]: I0714 21:20:20.342246 2658 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:20:20.342453 kubelet[2658]: I0714 21:20:20.342438 2658 kubelet.go:446] "Attempting to sync node with API server"
Jul 14 21:20:20.342541 kubelet[2658]: I0714 21:20:20.342530 2658 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:20:20.342613 kubelet[2658]: I0714 21:20:20.342604 2658 kubelet.go:352] "Adding apiserver pod source"
Jul 14 21:20:20.342670 kubelet[2658]: I0714 21:20:20.342660 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:20:20.343216 kubelet[2658]: I0714 21:20:20.343197 2658 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 14 21:20:20.343904 kubelet[2658]: I0714 21:20:20.343888 2658 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 21:20:20.344470 kubelet[2658]: I0714 21:20:20.344455 2658 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 21:20:20.344578 kubelet[2658]: I0714 21:20:20.344569 2658 server.go:1287] "Started kubelet"
Jul 14 21:20:20.345924 kubelet[2658]: I0714 21:20:20.345877 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:20:20.346149 kubelet[2658]: I0714 21:20:20.346103 2658 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:20:20.346194 kubelet[2658]: I0714 21:20:20.346158 2658 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:20:20.346352 kubelet[2658]: I0714 21:20:20.346338 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:20:20.346970 kubelet[2658]: I0714 21:20:20.346943 2658 server.go:479] "Adding debug handlers to kubelet server"
Jul 14 21:20:20.352160 kubelet[2658]: E0714 21:20:20.352121 2658 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:20:20.352393 kubelet[2658]: I0714 21:20:20.352361 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:20:20.354836 kubelet[2658]: E0714 21:20:20.354798 2658 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:20:20.354902 kubelet[2658]: I0714 21:20:20.354852 2658 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 21:20:20.355056 kubelet[2658]: I0714 21:20:20.355034 2658 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 21:20:20.355164 kubelet[2658]: I0714 21:20:20.355150 2658 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:20:20.358691 kubelet[2658]: I0714 21:20:20.358672 2658 factory.go:221] Registration of the containerd container factory successfully
Jul 14 21:20:20.358818 kubelet[2658]: I0714 21:20:20.358807 2658 factory.go:221] Registration of the systemd container factory successfully
Jul 14 21:20:20.358963 kubelet[2658]: I0714 21:20:20.358942 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:20:20.369881 kubelet[2658]: I0714 21:20:20.369840 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:20:20.371179 kubelet[2658]: I0714 21:20:20.370900 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:20:20.371179 kubelet[2658]: I0714 21:20:20.370927 2658 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 14 21:20:20.371179 kubelet[2658]: I0714 21:20:20.370952 2658 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 14 21:20:20.371179 kubelet[2658]: I0714 21:20:20.370959 2658 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 14 21:20:20.371179 kubelet[2658]: E0714 21:20:20.370997 2658 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:20:20.397786 kubelet[2658]: I0714 21:20:20.397756 2658 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 14 21:20:20.397786 kubelet[2658]: I0714 21:20:20.397779 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 14 21:20:20.397939 kubelet[2658]: I0714 21:20:20.397800 2658 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:20:20.397981 kubelet[2658]: I0714 21:20:20.397964 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 14 21:20:20.398008 kubelet[2658]: I0714 21:20:20.397980 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 14 21:20:20.398008 kubelet[2658]: I0714 21:20:20.397997 2658 policy_none.go:49] "None policy: Start"
Jul 14 21:20:20.398008 kubelet[2658]: I0714 21:20:20.398006 2658 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 14 21:20:20.398067 kubelet[2658]: I0714 21:20:20.398014 2658 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 21:20:20.398109 kubelet[2658]: I0714 21:20:20.398100 2658 state_mem.go:75] "Updated machine memory state"
Jul 14 21:20:20.401792 kubelet[2658]: I0714 21:20:20.401772 2658 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 14 21:20:20.401997 kubelet[2658]: I0714 21:20:20.401955 2658 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 21:20:20.401997 kubelet[2658]: I0714 21:20:20.401972 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 21:20:20.402182 kubelet[2658]: I0714 21:20:20.402151 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 21:20:20.403086 kubelet[2658]: E0714 21:20:20.403058 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 14 21:20:20.473336 kubelet[2658]: I0714 21:20:20.473289 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:20.473713 kubelet[2658]: I0714 21:20:20.473668 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:20.474348 kubelet[2658]: I0714 21:20:20.473637 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.481161 kubelet[2658]: E0714 21:20:20.481119 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.505947 kubelet[2658]: I0714 21:20:20.505910 2658 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:20:20.515808 kubelet[2658]: I0714 21:20:20.515769 2658 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 14 21:20:20.515950 kubelet[2658]: I0714 21:20:20.515851 2658 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 14 21:20:20.657140 kubelet[2658]: I0714 21:20:20.656800 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:20.657140 kubelet[2658]: I0714 21:20:20.656845 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.657140 kubelet[2658]: I0714 21:20:20.656872 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.657140 kubelet[2658]: I0714 21:20:20.656888 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:20.657140 kubelet[2658]: I0714 21:20:20.656905 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a453709d19b23c6517be52587651de1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a453709d19b23c6517be52587651de1\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:20:20.657394 kubelet[2658]: I0714 21:20:20.656930 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.657394 kubelet[2658]: I0714 21:20:20.656950 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.657394 kubelet[2658]: I0714 21:20:20.656967 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:20:20.657394 kubelet[2658]: I0714 21:20:20.656983 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:20.779524 kubelet[2658]: E0714 21:20:20.779480 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:20.780284 kubelet[2658]: E0714 21:20:20.779862 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:20.782831 kubelet[2658]: E0714 21:20:20.781797 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:20.832232 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 14 21:20:20.832875 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 14 21:20:21.139473 sudo[2693]: pam_unix(sudo:session): session closed for user root
Jul 14 21:20:21.343128 kubelet[2658]: I0714 21:20:21.342868 2658 apiserver.go:52] "Watching apiserver"
Jul 14 21:20:21.355637 kubelet[2658]: I0714 21:20:21.355584 2658 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 14 21:20:21.386421 kubelet[2658]: E0714 21:20:21.386353 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:21.386749 kubelet[2658]: E0714 21:20:21.386688 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:21.386959 kubelet[2658]: I0714 21:20:21.386946 2658 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:21.403556 kubelet[2658]: E0714 21:20:21.402951 2658 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:20:21.403556 kubelet[2658]: E0714 21:20:21.403118 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:21.440889 kubelet[2658]: I0714 21:20:21.440241 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.440220426 podStartE2EDuration="1.440220426s" podCreationTimestamp="2025-07-14 21:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:20:21.440161393 +0000 UTC m=+1.165958449" watchObservedRunningTime="2025-07-14 21:20:21.440220426 +0000 UTC m=+1.166017482"
Jul 14 21:20:21.440889 kubelet[2658]: I0714 21:20:21.440477 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4404709740000001 podStartE2EDuration="1.440470974s" podCreationTimestamp="2025-07-14 21:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:20:21.432423387 +0000 UTC m=+1.158220443" watchObservedRunningTime="2025-07-14 21:20:21.440470974 +0000 UTC m=+1.166268030"
Jul 14 21:20:21.457991 kubelet[2658]: I0714 21:20:21.457913 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.457897106 podStartE2EDuration="2.457897106s" podCreationTimestamp="2025-07-14 21:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:20:21.449414224 +0000 UTC m=+1.175211280" watchObservedRunningTime="2025-07-14 21:20:21.457897106 +0000 UTC m=+1.183694162"
Jul 14 21:20:22.387735 kubelet[2658]: E0714 21:20:22.387631 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:22.388060 kubelet[2658]: E0714 21:20:22.387856 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:22.932231 sudo[1746]: pam_unix(sudo:session): session closed for user root
Jul 14 21:20:22.933511 sshd[1745]: Connection closed by 10.0.0.1 port 54298
Jul 14 21:20:22.934979 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jul 14 21:20:22.938775 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:54298.service: Deactivated successfully.
Jul 14 21:20:22.941487 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 21:20:22.941688 systemd[1]: session-7.scope: Consumed 6.432s CPU time, 259.6M memory peak.
Jul 14 21:20:22.943657 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit.
Jul 14 21:20:22.944834 systemd-logind[1503]: Removed session 7.
Jul 14 21:20:25.237983 kubelet[2658]: E0714 21:20:25.237942 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:25.392225 kubelet[2658]: E0714 21:20:25.392142 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:26.165185 kubelet[2658]: I0714 21:20:26.165145 2658 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 14 21:20:26.166677 containerd[1527]: time="2025-07-14T21:20:26.165449879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 14 21:20:26.167476 kubelet[2658]: I0714 21:20:26.165596 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 21:20:26.393612 kubelet[2658]: E0714 21:20:26.393583 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.200800 kubelet[2658]: I0714 21:20:27.200687 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-etc-cni-netd\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.200995 kubelet[2658]: I0714 21:20:27.200901 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvn6x\" (UniqueName: \"kubernetes.io/projected/7cdf7a1a-de82-4a83-b687-6da9221a99a3-kube-api-access-dvn6x\") pod \"kube-proxy-k9ppn\" (UID: \"7cdf7a1a-de82-4a83-b687-6da9221a99a3\") " pod="kube-system/kube-proxy-k9ppn"
Jul 14 21:20:27.200995 kubelet[2658]: I0714 21:20:27.200932 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-bpf-maps\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201002 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-lib-modules\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201022 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ff7662-b733-4065-8d73-dcd869390744-clustermesh-secrets\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201054 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cdf7a1a-de82-4a83-b687-6da9221a99a3-lib-modules\") pod \"kube-proxy-k9ppn\" (UID: \"7cdf7a1a-de82-4a83-b687-6da9221a99a3\") " pod="kube-system/kube-proxy-k9ppn"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201073 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-run\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201090 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cni-path\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.201812 kubelet[2658]: I0714 21:20:27.201112 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-hubble-tls\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201138 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzd4f\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-kube-api-access-mzd4f\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201156 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-xtables-lock\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201176 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cdf7a1a-de82-4a83-b687-6da9221a99a3-kube-proxy\") pod \"kube-proxy-k9ppn\" (UID: \"7cdf7a1a-de82-4a83-b687-6da9221a99a3\") " pod="kube-system/kube-proxy-k9ppn"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201191 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cdf7a1a-de82-4a83-b687-6da9221a99a3-xtables-lock\") pod \"kube-proxy-k9ppn\" (UID: \"7cdf7a1a-de82-4a83-b687-6da9221a99a3\") " pod="kube-system/kube-proxy-k9ppn"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201213 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-hostproc\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202008 kubelet[2658]: I0714 21:20:27.201232 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-cgroup\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202124 kubelet[2658]: I0714 21:20:27.201251 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ff7662-b733-4065-8d73-dcd869390744-cilium-config-path\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202124 kubelet[2658]: I0714 21:20:27.201265 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-net\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.202124 kubelet[2658]: I0714 21:20:27.201284 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-kernel\") pod \"cilium-wmpt9\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " pod="kube-system/cilium-wmpt9"
Jul 14 21:20:27.204372 systemd[1]: Created slice kubepods-besteffort-pod7cdf7a1a_de82_4a83_b687_6da9221a99a3.slice - libcontainer container kubepods-besteffort-pod7cdf7a1a_de82_4a83_b687_6da9221a99a3.slice.
Jul 14 21:20:27.210279 systemd[1]: Created slice kubepods-burstable-podc2ff7662_b733_4065_8d73_dcd869390744.slice - libcontainer container kubepods-burstable-podc2ff7662_b733_4065_8d73_dcd869390744.slice.
Jul 14 21:20:27.297953 systemd[1]: Created slice kubepods-besteffort-podafa2b4b1_083c_4dbf_9943_dc585b244cb8.slice - libcontainer container kubepods-besteffort-podafa2b4b1_083c_4dbf_9943_dc585b244cb8.slice.
Jul 14 21:20:27.301556 kubelet[2658]: I0714 21:20:27.301426 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2b4b1-083c-4dbf-9943-dc585b244cb8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5knzc\" (UID: \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\") " pod="kube-system/cilium-operator-6c4d7847fc-5knzc"
Jul 14 21:20:27.301833 kubelet[2658]: I0714 21:20:27.301813 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vkg6\" (UniqueName: \"kubernetes.io/projected/afa2b4b1-083c-4dbf-9943-dc585b244cb8-kube-api-access-2vkg6\") pod \"cilium-operator-6c4d7847fc-5knzc\" (UID: \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\") " pod="kube-system/cilium-operator-6c4d7847fc-5knzc"
Jul 14 21:20:27.521030 kubelet[2658]: E0714 21:20:27.520893 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.521874 containerd[1527]: time="2025-07-14T21:20:27.521590055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9ppn,Uid:7cdf7a1a-de82-4a83-b687-6da9221a99a3,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:27.523420 kubelet[2658]: E0714 21:20:27.523374 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.523859 containerd[1527]: time="2025-07-14T21:20:27.523804987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmpt9,Uid:c2ff7662-b733-4065-8d73-dcd869390744,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:27.539480 containerd[1527]: time="2025-07-14T21:20:27.539132042Z" level=info msg="connecting to shim 96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2" address="unix:///run/containerd/s/4eacd2c3cfbc79dbc7f6c825fe31a70f8e38c22ce820040f0a734136cb71339e" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:27.542579 containerd[1527]: time="2025-07-14T21:20:27.542506984Z" level=info msg="connecting to shim 1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:27.566859 systemd[1]: Started cri-containerd-96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2.scope - libcontainer container 96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2.
Jul 14 21:20:27.570052 systemd[1]: Started cri-containerd-1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907.scope - libcontainer container 1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907.
Jul 14 21:20:27.592956 containerd[1527]: time="2025-07-14T21:20:27.592906724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9ppn,Uid:7cdf7a1a-de82-4a83-b687-6da9221a99a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2\""
Jul 14 21:20:27.593998 kubelet[2658]: E0714 21:20:27.593969 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.596449 containerd[1527]: time="2025-07-14T21:20:27.596408052Z" level=info msg="CreateContainer within sandbox \"96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 21:20:27.597640 containerd[1527]: time="2025-07-14T21:20:27.597601890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmpt9,Uid:c2ff7662-b733-4065-8d73-dcd869390744,Namespace:kube-system,Attempt:0,} returns sandbox id \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\""
Jul 14 21:20:27.598231 kubelet[2658]: E0714 21:20:27.598210 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.599162 containerd[1527]: time="2025-07-14T21:20:27.599127206Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 14 21:20:27.601294 kubelet[2658]: E0714 21:20:27.601190 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:27.601939 containerd[1527]: time="2025-07-14T21:20:27.601776581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5knzc,Uid:afa2b4b1-083c-4dbf-9943-dc585b244cb8,Namespace:kube-system,Attempt:0,}"
Jul 14 21:20:27.607934 containerd[1527]: time="2025-07-14T21:20:27.607907747Z" level=info msg="Container ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:20:27.615785 containerd[1527]: time="2025-07-14T21:20:27.615742938Z" level=info msg="CreateContainer within sandbox \"96ffc2a784dfe5607e6671f6d47d2e6311e24ef4223ffbb732845e12d121eca2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a\""
Jul 14 21:20:27.616303 containerd[1527]: time="2025-07-14T21:20:27.616241035Z" level=info msg="StartContainer for \"ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a\""
Jul 14 21:20:27.618016 containerd[1527]: time="2025-07-14T21:20:27.617960553Z" level=info msg="connecting to shim ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a" address="unix:///run/containerd/s/4eacd2c3cfbc79dbc7f6c825fe31a70f8e38c22ce820040f0a734136cb71339e" protocol=ttrpc version=3
Jul 14 21:20:27.624075 containerd[1527]: time="2025-07-14T21:20:27.624032029Z" level=info msg="connecting to shim 7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872" address="unix:///run/containerd/s/03a53a2b96196c4532775cc39ba59c6f9bb92d04f97f2414072ae01a8389f0dd" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:20:27.638313 systemd[1]: Started cri-containerd-ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a.scope - libcontainer container ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a.
Jul 14 21:20:27.641089 systemd[1]: Started cri-containerd-7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872.scope - libcontainer container 7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872.
Jul 14 21:20:27.679050 containerd[1527]: time="2025-07-14T21:20:27.677588249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5knzc,Uid:afa2b4b1-083c-4dbf-9943-dc585b244cb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\""
Jul 14 21:20:27.679050 containerd[1527]: time="2025-07-14T21:20:27.678022812Z" level=info msg="StartContainer for \"ea3a7435fbb4bcc1bdd86b4e9daec6745d106677c86fee8b29e93fde59d3549a\" returns successfully"
Jul 14 21:20:27.679167 kubelet[2658]: E0714 21:20:27.678534 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:28.403891 kubelet[2658]: E0714 21:20:28.403404 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:28.415764 kubelet[2658]: I0714 21:20:28.415687 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9ppn" podStartSLOduration=1.4156729129999999 podStartE2EDuration="1.415672913s" podCreationTimestamp="2025-07-14 21:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:20:28.415304584 +0000 UTC m=+8.141101640" watchObservedRunningTime="2025-07-14 21:20:28.415672913 +0000 UTC m=+8.141469969"
Jul 14 21:20:29.457748 kubelet[2658]: E0714 21:20:29.457716 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:29.693784 kubelet[2658]: E0714 21:20:29.693689 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:30.433558 kubelet[2658]: E0714 21:20:30.433531 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:30.434824 kubelet[2658]: E0714 21:20:30.434806 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:31.407456 kubelet[2658]: E0714 21:20:31.407425 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:20:31.675081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895025102.mount: Deactivated successfully.
Jul 14 21:20:32.943089 containerd[1527]: time="2025-07-14T21:20:32.943034392Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:32.943646 containerd[1527]: time="2025-07-14T21:20:32.943621387Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 14 21:20:32.944640 containerd[1527]: time="2025-07-14T21:20:32.944613508Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:32.946169 containerd[1527]: time="2025-07-14T21:20:32.946136510Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.346876236s" Jul 14 21:20:32.946228 containerd[1527]: time="2025-07-14T21:20:32.946177735Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 21:20:32.954085 containerd[1527]: time="2025-07-14T21:20:32.954012000Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 21:20:32.958639 containerd[1527]: time="2025-07-14T21:20:32.958240802Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:20:32.967056 containerd[1527]: time="2025-07-14T21:20:32.967005510Z" level=info msg="Container 73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:32.970179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823884433.mount: Deactivated successfully. Jul 14 21:20:32.980641 containerd[1527]: time="2025-07-14T21:20:32.980536025Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\"" Jul 14 21:20:32.982473 containerd[1527]: time="2025-07-14T21:20:32.982436496Z" level=info msg="StartContainer for \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\"" Jul 14 21:20:32.983630 containerd[1527]: time="2025-07-14T21:20:32.983527036Z" level=info msg="connecting to shim 73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" protocol=ttrpc version=3 Jul 14 21:20:33.038864 systemd[1]: Started cri-containerd-73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274.scope - libcontainer container 73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274. Jul 14 21:20:33.071529 containerd[1527]: time="2025-07-14T21:20:33.069978936Z" level=info msg="StartContainer for \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" returns successfully" Jul 14 21:20:33.114572 systemd[1]: cri-containerd-73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274.scope: Deactivated successfully. 
Jul 14 21:20:33.141907 containerd[1527]: time="2025-07-14T21:20:33.141855950Z" level=info msg="received exit event container_id:\"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" id:\"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" pid:3077 exited_at:{seconds:1752528033 nanos:133551154}" Jul 14 21:20:33.143130 containerd[1527]: time="2025-07-14T21:20:33.142800646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" id:\"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" pid:3077 exited_at:{seconds:1752528033 nanos:133551154}" Jul 14 21:20:33.191415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274-rootfs.mount: Deactivated successfully. Jul 14 21:20:33.417783 kubelet[2658]: E0714 21:20:33.417546 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:33.422584 containerd[1527]: time="2025-07-14T21:20:33.421817159Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:20:33.436489 containerd[1527]: time="2025-07-14T21:20:33.436449508Z" level=info msg="Container 425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:33.442109 containerd[1527]: time="2025-07-14T21:20:33.442071380Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\"" Jul 14 21:20:33.443626 containerd[1527]: 
time="2025-07-14T21:20:33.442785706Z" level=info msg="StartContainer for \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\"" Jul 14 21:20:33.443626 containerd[1527]: time="2025-07-14T21:20:33.443526006Z" level=info msg="connecting to shim 425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" protocol=ttrpc version=3 Jul 14 21:20:33.471890 systemd[1]: Started cri-containerd-425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b.scope - libcontainer container 425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b. Jul 14 21:20:33.497909 containerd[1527]: time="2025-07-14T21:20:33.497869304Z" level=info msg="StartContainer for \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" returns successfully" Jul 14 21:20:33.523429 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:20:33.523633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:20:33.524225 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:20:33.525837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:20:33.527349 systemd[1]: cri-containerd-425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b.scope: Deactivated successfully. 
Jul 14 21:20:33.529552 containerd[1527]: time="2025-07-14T21:20:33.529519916Z" level=info msg="received exit event container_id:\"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" id:\"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" pid:3126 exited_at:{seconds:1752528033 nanos:529329288}" Jul 14 21:20:33.529948 containerd[1527]: time="2025-07-14T21:20:33.529573666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" id:\"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" pid:3126 exited_at:{seconds:1752528033 nanos:529329288}" Jul 14 21:20:33.558526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:20:34.047927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888258222.mount: Deactivated successfully. Jul 14 21:20:34.421356 kubelet[2658]: E0714 21:20:34.421100 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:34.426615 containerd[1527]: time="2025-07-14T21:20:34.426577108Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 21:20:34.463930 containerd[1527]: time="2025-07-14T21:20:34.463882447Z" level=info msg="Container fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:34.472787 containerd[1527]: time="2025-07-14T21:20:34.472739442Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\"" Jul 14 21:20:34.474095 containerd[1527]: 
time="2025-07-14T21:20:34.474065628Z" level=info msg="StartContainer for \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\"" Jul 14 21:20:34.475869 containerd[1527]: time="2025-07-14T21:20:34.475833650Z" level=info msg="connecting to shim fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" protocol=ttrpc version=3 Jul 14 21:20:34.484438 containerd[1527]: time="2025-07-14T21:20:34.483723170Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:34.484438 containerd[1527]: time="2025-07-14T21:20:34.484305079Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 14 21:20:34.485077 containerd[1527]: time="2025-07-14T21:20:34.485052918Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:20:34.486178 containerd[1527]: time="2025-07-14T21:20:34.486141577Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.53208331s" Jul 14 21:20:34.486178 containerd[1527]: time="2025-07-14T21:20:34.486177116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 21:20:34.489033 containerd[1527]: time="2025-07-14T21:20:34.489005702Z" level=info msg="CreateContainer within sandbox \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 21:20:34.504917 systemd[1]: Started cri-containerd-fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401.scope - libcontainer container fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401. Jul 14 21:20:34.507537 containerd[1527]: time="2025-07-14T21:20:34.506908072Z" level=info msg="Container 8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:34.515202 containerd[1527]: time="2025-07-14T21:20:34.515167389Z" level=info msg="CreateContainer within sandbox \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\"" Jul 14 21:20:34.516039 containerd[1527]: time="2025-07-14T21:20:34.516007837Z" level=info msg="StartContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\"" Jul 14 21:20:34.516897 containerd[1527]: time="2025-07-14T21:20:34.516823431Z" level=info msg="connecting to shim 8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b" address="unix:///run/containerd/s/03a53a2b96196c4532775cc39ba59c6f9bb92d04f97f2414072ae01a8389f0dd" protocol=ttrpc version=3 Jul 14 21:20:34.533864 systemd[1]: Started cri-containerd-8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b.scope - libcontainer container 8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b. 
Jul 14 21:20:34.545091 containerd[1527]: time="2025-07-14T21:20:34.545029767Z" level=info msg="StartContainer for \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" returns successfully" Jul 14 21:20:34.560364 systemd[1]: cri-containerd-fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401.scope: Deactivated successfully. Jul 14 21:20:34.561390 containerd[1527]: time="2025-07-14T21:20:34.561342411Z" level=info msg="received exit event container_id:\"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" id:\"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" pid:3190 exited_at:{seconds:1752528034 nanos:561166237}" Jul 14 21:20:34.561624 containerd[1527]: time="2025-07-14T21:20:34.561580378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" id:\"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" pid:3190 exited_at:{seconds:1752528034 nanos:561166237}" Jul 14 21:20:34.593748 containerd[1527]: time="2025-07-14T21:20:34.590689914Z" level=info msg="StartContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" returns successfully" Jul 14 21:20:35.424541 kubelet[2658]: E0714 21:20:35.424512 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:35.431470 kubelet[2658]: E0714 21:20:35.431447 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:35.434189 containerd[1527]: time="2025-07-14T21:20:35.434155044Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 21:20:35.450905 
containerd[1527]: time="2025-07-14T21:20:35.450871307Z" level=info msg="Container 2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:35.460726 containerd[1527]: time="2025-07-14T21:20:35.460546416Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\"" Jul 14 21:20:35.461527 containerd[1527]: time="2025-07-14T21:20:35.461498571Z" level=info msg="StartContainer for \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\"" Jul 14 21:20:35.462570 containerd[1527]: time="2025-07-14T21:20:35.462424954Z" level=info msg="connecting to shim 2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" protocol=ttrpc version=3 Jul 14 21:20:35.463310 kubelet[2658]: I0714 21:20:35.463248 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5knzc" podStartSLOduration=1.654916898 podStartE2EDuration="8.463232156s" podCreationTimestamp="2025-07-14 21:20:27 +0000 UTC" firstStartedPulling="2025-07-14 21:20:27.678982655 +0000 UTC m=+7.404779711" lastFinishedPulling="2025-07-14 21:20:34.487297913 +0000 UTC m=+14.213094969" observedRunningTime="2025-07-14 21:20:35.436676943 +0000 UTC m=+15.162474079" watchObservedRunningTime="2025-07-14 21:20:35.463232156 +0000 UTC m=+15.189029253" Jul 14 21:20:35.481850 systemd[1]: Started cri-containerd-2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9.scope - libcontainer container 2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9. 
Jul 14 21:20:35.504853 systemd[1]: cri-containerd-2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9.scope: Deactivated successfully. Jul 14 21:20:35.506250 containerd[1527]: time="2025-07-14T21:20:35.506218491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" id:\"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" pid:3264 exited_at:{seconds:1752528035 nanos:505901693}" Jul 14 21:20:35.507947 containerd[1527]: time="2025-07-14T21:20:35.507674898Z" level=info msg="received exit event container_id:\"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" id:\"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" pid:3264 exited_at:{seconds:1752528035 nanos:505901693}" Jul 14 21:20:35.514289 containerd[1527]: time="2025-07-14T21:20:35.514257023Z" level=info msg="StartContainer for \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" returns successfully" Jul 14 21:20:36.437720 kubelet[2658]: E0714 21:20:36.437645 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:36.438157 kubelet[2658]: E0714 21:20:36.437896 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:36.440329 containerd[1527]: time="2025-07-14T21:20:36.440270431Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 21:20:36.466856 containerd[1527]: time="2025-07-14T21:20:36.465349766Z" level=info msg="Container ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:36.498273 
containerd[1527]: time="2025-07-14T21:20:36.498207741Z" level=info msg="CreateContainer within sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\"" Jul 14 21:20:36.499037 containerd[1527]: time="2025-07-14T21:20:36.499007755Z" level=info msg="StartContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\"" Jul 14 21:20:36.500223 containerd[1527]: time="2025-07-14T21:20:36.500184306Z" level=info msg="connecting to shim ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66" address="unix:///run/containerd/s/5c3a156bdd30586f4688eb585c672ad5328323f2d6ea33b6c6b760e0e7db14e8" protocol=ttrpc version=3 Jul 14 21:20:36.521931 systemd[1]: Started cri-containerd-ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66.scope - libcontainer container ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66. 
Jul 14 21:20:36.552072 containerd[1527]: time="2025-07-14T21:20:36.551992628Z" level=info msg="StartContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" returns successfully" Jul 14 21:20:36.679214 containerd[1527]: time="2025-07-14T21:20:36.679171578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" id:\"54aca1372d2c479dfd7b34d68513a6f8563329a5cc1b9575af2ee9c158de78f7\" pid:3331 exited_at:{seconds:1752528036 nanos:678848146}" Jul 14 21:20:36.721724 kubelet[2658]: I0714 21:20:36.721396 2658 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 21:20:36.964890 kubelet[2658]: I0714 21:20:36.964836 2658 status_manager.go:890] "Failed to get status for pod" podUID="6008a458-15cd-45cd-acde-f202a760fd3a" pod="kube-system/coredns-668d6bf9bc-k6vxd" err="pods \"coredns-668d6bf9bc-k6vxd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 14 21:20:36.976809 systemd[1]: Created slice kubepods-burstable-pod6008a458_15cd_45cd_acde_f202a760fd3a.slice - libcontainer container kubepods-burstable-pod6008a458_15cd_45cd_acde_f202a760fd3a.slice. Jul 14 21:20:36.985148 systemd[1]: Created slice kubepods-burstable-pod2fad3933_69c8_4622_a0de_3dbd21035524.slice - libcontainer container kubepods-burstable-pod2fad3933_69c8_4622_a0de_3dbd21035524.slice. 
Jul 14 21:20:37.072357 kubelet[2658]: I0714 21:20:37.072309 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6008a458-15cd-45cd-acde-f202a760fd3a-config-volume\") pod \"coredns-668d6bf9bc-k6vxd\" (UID: \"6008a458-15cd-45cd-acde-f202a760fd3a\") " pod="kube-system/coredns-668d6bf9bc-k6vxd" Jul 14 21:20:37.072357 kubelet[2658]: I0714 21:20:37.072354 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqzl\" (UniqueName: \"kubernetes.io/projected/6008a458-15cd-45cd-acde-f202a760fd3a-kube-api-access-8lqzl\") pod \"coredns-668d6bf9bc-k6vxd\" (UID: \"6008a458-15cd-45cd-acde-f202a760fd3a\") " pod="kube-system/coredns-668d6bf9bc-k6vxd" Jul 14 21:20:37.072515 kubelet[2658]: I0714 21:20:37.072381 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fad3933-69c8-4622-a0de-3dbd21035524-config-volume\") pod \"coredns-668d6bf9bc-7bq6c\" (UID: \"2fad3933-69c8-4622-a0de-3dbd21035524\") " pod="kube-system/coredns-668d6bf9bc-7bq6c" Jul 14 21:20:37.072515 kubelet[2658]: I0714 21:20:37.072404 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqcx\" (UniqueName: \"kubernetes.io/projected/2fad3933-69c8-4622-a0de-3dbd21035524-kube-api-access-zjqcx\") pod \"coredns-668d6bf9bc-7bq6c\" (UID: \"2fad3933-69c8-4622-a0de-3dbd21035524\") " pod="kube-system/coredns-668d6bf9bc-7bq6c" Jul 14 21:20:37.282394 kubelet[2658]: E0714 21:20:37.282356 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:37.283718 containerd[1527]: time="2025-07-14T21:20:37.283194635Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-k6vxd,Uid:6008a458-15cd-45cd-acde-f202a760fd3a,Namespace:kube-system,Attempt:0,}" Jul 14 21:20:37.289338 kubelet[2658]: E0714 21:20:37.289295 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:37.290580 containerd[1527]: time="2025-07-14T21:20:37.290459342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bq6c,Uid:2fad3933-69c8-4622-a0de-3dbd21035524,Namespace:kube-system,Attempt:0,}" Jul 14 21:20:37.446918 kubelet[2658]: E0714 21:20:37.446880 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:38.449071 kubelet[2658]: E0714 21:20:38.449037 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:38.776392 systemd-networkd[1437]: cilium_host: Link UP Jul 14 21:20:38.776537 systemd-networkd[1437]: cilium_net: Link UP Jul 14 21:20:38.776661 systemd-networkd[1437]: cilium_net: Gained carrier Jul 14 21:20:38.776833 systemd-networkd[1437]: cilium_host: Gained carrier Jul 14 21:20:38.877543 systemd-networkd[1437]: cilium_vxlan: Link UP Jul 14 21:20:38.877551 systemd-networkd[1437]: cilium_vxlan: Gained carrier Jul 14 21:20:39.181884 systemd-networkd[1437]: cilium_host: Gained IPv6LL Jul 14 21:20:39.218731 kernel: NET: Registered PF_ALG protocol family Jul 14 21:20:39.450779 kubelet[2658]: E0714 21:20:39.450594 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:39.464849 update_engine[1511]: I20250714 21:20:39.464756 1511 update_attempter.cc:509] Updating boot 
flags... Jul 14 21:20:39.566775 systemd-networkd[1437]: cilium_net: Gained IPv6LL Jul 14 21:20:39.924483 systemd-networkd[1437]: lxc_health: Link UP Jul 14 21:20:39.924727 systemd-networkd[1437]: lxc_health: Gained carrier Jul 14 21:20:40.334251 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL Jul 14 21:20:40.413417 systemd-networkd[1437]: lxc982fce59ed3f: Link UP Jul 14 21:20:40.426613 systemd-networkd[1437]: lxc283912f93cec: Link UP Jul 14 21:20:40.427738 kernel: eth0: renamed from tmpe843c Jul 14 21:20:40.429880 systemd-networkd[1437]: lxc982fce59ed3f: Gained carrier Jul 14 21:20:40.431768 kernel: eth0: renamed from tmpa0955 Jul 14 21:20:40.434305 systemd-networkd[1437]: lxc283912f93cec: Gained carrier Jul 14 21:20:41.293846 systemd-networkd[1437]: lxc_health: Gained IPv6LL Jul 14 21:20:41.524928 kubelet[2658]: E0714 21:20:41.524881 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:41.555003 kubelet[2658]: I0714 21:20:41.554692 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wmpt9" podStartSLOduration=9.199586483 podStartE2EDuration="14.554676632s" podCreationTimestamp="2025-07-14 21:20:27 +0000 UTC" firstStartedPulling="2025-07-14 21:20:27.598648766 +0000 UTC m=+7.324445822" lastFinishedPulling="2025-07-14 21:20:32.953738915 +0000 UTC m=+12.679535971" observedRunningTime="2025-07-14 21:20:37.466188393 +0000 UTC m=+17.191985449" watchObservedRunningTime="2025-07-14 21:20:41.554676632 +0000 UTC m=+21.280473688" Jul 14 21:20:42.253928 systemd-networkd[1437]: lxc982fce59ed3f: Gained IPv6LL Jul 14 21:20:42.254234 systemd-networkd[1437]: lxc283912f93cec: Gained IPv6LL Jul 14 21:20:42.456962 kubelet[2658]: E0714 21:20:42.456929 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:43.459126 kubelet[2658]: E0714 21:20:43.459069 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:43.973892 containerd[1527]: time="2025-07-14T21:20:43.973852017Z" level=info msg="connecting to shim e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35" address="unix:///run/containerd/s/6ef5d8a1464a309f959cb76cf50d45a39a39c1d4e5bd44d7595848503497feda" namespace=k8s.io protocol=ttrpc version=3 Jul 14 21:20:43.975202 containerd[1527]: time="2025-07-14T21:20:43.975168609Z" level=info msg="connecting to shim a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375" address="unix:///run/containerd/s/2517119a9ee55ef39ac5de7488bc0eb844f0f22872fdb06277010fcc968b958e" namespace=k8s.io protocol=ttrpc version=3 Jul 14 21:20:43.996509 systemd[1]: Started cri-containerd-e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35.scope - libcontainer container e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35. 
Jul 14 21:20:44.007213 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:20:44.027481 containerd[1527]: time="2025-07-14T21:20:44.027048551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6vxd,Uid:6008a458-15cd-45cd-acde-f202a760fd3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35\"" Jul 14 21:20:44.028253 kubelet[2658]: E0714 21:20:44.028222 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:44.030462 containerd[1527]: time="2025-07-14T21:20:44.030348673Z" level=info msg="CreateContainer within sandbox \"e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:20:44.038913 systemd[1]: Started cri-containerd-a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375.scope - libcontainer container a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375. 
Jul 14 21:20:44.043735 containerd[1527]: time="2025-07-14T21:20:44.043285646Z" level=info msg="Container ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:44.049853 containerd[1527]: time="2025-07-14T21:20:44.049816710Z" level=info msg="CreateContainer within sandbox \"e843cce585639f38e09e0a2ed0e03846aee7eabaeaab82b036a5f1b339967b35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d\"" Jul 14 21:20:44.051723 containerd[1527]: time="2025-07-14T21:20:44.050963510Z" level=info msg="StartContainer for \"ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d\"" Jul 14 21:20:44.051723 containerd[1527]: time="2025-07-14T21:20:44.051672508Z" level=info msg="connecting to shim ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d" address="unix:///run/containerd/s/6ef5d8a1464a309f959cb76cf50d45a39a39c1d4e5bd44d7595848503497feda" protocol=ttrpc version=3 Jul 14 21:20:44.055231 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:20:44.071894 systemd[1]: Started cri-containerd-ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d.scope - libcontainer container ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d. 
Jul 14 21:20:44.078289 containerd[1527]: time="2025-07-14T21:20:44.078222563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7bq6c,Uid:2fad3933-69c8-4622-a0de-3dbd21035524,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375\"" Jul 14 21:20:44.079060 kubelet[2658]: E0714 21:20:44.079035 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:44.081449 containerd[1527]: time="2025-07-14T21:20:44.081255010Z" level=info msg="CreateContainer within sandbox \"a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:20:44.090245 containerd[1527]: time="2025-07-14T21:20:44.090217352Z" level=info msg="Container 19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db: CDI devices from CRI Config.CDIDevices: []" Jul 14 21:20:44.098077 containerd[1527]: time="2025-07-14T21:20:44.098026213Z" level=info msg="CreateContainer within sandbox \"a0955fbde1bf39c79ebd46b2fd4f1b5c4dbf54970c1960bd74df10a6d518b375\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db\"" Jul 14 21:20:44.098739 containerd[1527]: time="2025-07-14T21:20:44.098687678Z" level=info msg="StartContainer for \"19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db\"" Jul 14 21:20:44.101687 containerd[1527]: time="2025-07-14T21:20:44.101638422Z" level=info msg="connecting to shim 19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db" address="unix:///run/containerd/s/2517119a9ee55ef39ac5de7488bc0eb844f0f22872fdb06277010fcc968b958e" protocol=ttrpc version=3 Jul 14 21:20:44.105531 containerd[1527]: time="2025-07-14T21:20:44.105491378Z" level=info msg="StartContainer for 
\"ee75d04dd6b53b7de11e062ae1176d94cc3b7c02ee39b9529441ce777a70b53d\" returns successfully" Jul 14 21:20:44.130921 systemd[1]: Started cri-containerd-19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db.scope - libcontainer container 19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db. Jul 14 21:20:44.162887 containerd[1527]: time="2025-07-14T21:20:44.162844355Z" level=info msg="StartContainer for \"19c4c093f0e9ed84271e4184f969895fa6ac9636e49eab2737694a7b58e0c6db\" returns successfully" Jul 14 21:20:44.464318 kubelet[2658]: E0714 21:20:44.464244 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:44.467982 kubelet[2658]: E0714 21:20:44.467957 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:44.476278 kubelet[2658]: I0714 21:20:44.476224 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7bq6c" podStartSLOduration=17.476210469 podStartE2EDuration="17.476210469s" podCreationTimestamp="2025-07-14 21:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:20:44.475074632 +0000 UTC m=+24.200871688" watchObservedRunningTime="2025-07-14 21:20:44.476210469 +0000 UTC m=+24.202007525" Jul 14 21:20:44.497533 kubelet[2658]: I0714 21:20:44.497468 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k6vxd" podStartSLOduration=17.497450121 podStartE2EDuration="17.497450121s" podCreationTimestamp="2025-07-14 21:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 
21:20:44.486614135 +0000 UTC m=+24.212411231" watchObservedRunningTime="2025-07-14 21:20:44.497450121 +0000 UTC m=+24.223247137" Jul 14 21:20:44.959416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130087985.mount: Deactivated successfully. Jul 14 21:20:45.469898 kubelet[2658]: E0714 21:20:45.469850 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:45.469898 kubelet[2658]: E0714 21:20:45.469879 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:46.472070 kubelet[2658]: E0714 21:20:46.471980 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:46.472420 kubelet[2658]: E0714 21:20:46.472393 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:20:47.597849 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728). Jul 14 21:20:47.660997 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:20:47.662273 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:47.666630 systemd-logind[1503]: New session 8 of user core. Jul 14 21:20:47.677833 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 14 21:20:47.797205 sshd[3994]: Connection closed by 10.0.0.1 port 55728 Jul 14 21:20:47.796781 sshd-session[3991]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:47.800065 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:55728.service: Deactivated successfully. Jul 14 21:20:47.801610 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:20:47.805159 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:20:47.806473 systemd-logind[1503]: Removed session 8. Jul 14 21:20:52.811632 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:36642.service - OpenSSH per-connection server daemon (10.0.0.1:36642). Jul 14 21:20:52.861164 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 36642 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:20:52.862278 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:52.866104 systemd-logind[1503]: New session 9 of user core. Jul 14 21:20:52.872830 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 21:20:52.980851 sshd[4014]: Connection closed by 10.0.0.1 port 36642 Jul 14 21:20:52.981157 sshd-session[4011]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:52.985261 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:36642.service: Deactivated successfully. Jul 14 21:20:52.988037 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:20:52.988646 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:20:52.990322 systemd-logind[1503]: Removed session 9. Jul 14 21:20:57.995150 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656). 
Jul 14 21:20:58.042821 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:20:58.043915 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:20:58.047759 systemd-logind[1503]: New session 10 of user core. Jul 14 21:20:58.066924 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 21:20:58.176460 sshd[4034]: Connection closed by 10.0.0.1 port 36656 Jul 14 21:20:58.176934 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Jul 14 21:20:58.180279 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:36656.service: Deactivated successfully. Jul 14 21:20:58.182271 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:20:58.183279 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:20:58.184483 systemd-logind[1503]: Removed session 10. Jul 14 21:21:03.192129 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:46156.service - OpenSSH per-connection server daemon (10.0.0.1:46156). Jul 14 21:21:03.254952 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 46156 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:03.256130 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:03.261465 systemd-logind[1503]: New session 11 of user core. Jul 14 21:21:03.275895 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 21:21:03.407710 sshd[4052]: Connection closed by 10.0.0.1 port 46156 Jul 14 21:21:03.408842 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:03.417291 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:46156.service: Deactivated successfully. Jul 14 21:21:03.419374 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:21:03.420282 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit. 
Jul 14 21:21:03.422096 systemd-logind[1503]: Removed session 11. Jul 14 21:21:03.423489 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:46164.service - OpenSSH per-connection server daemon (10.0.0.1:46164). Jul 14 21:21:03.489436 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:03.490902 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:03.495205 systemd-logind[1503]: New session 12 of user core. Jul 14 21:21:03.510896 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 21:21:03.669133 sshd[4069]: Connection closed by 10.0.0.1 port 46164 Jul 14 21:21:03.669998 sshd-session[4066]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:03.682670 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:46164.service: Deactivated successfully. Jul 14 21:21:03.685925 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:21:03.687894 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. Jul 14 21:21:03.692250 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:46172.service - OpenSSH per-connection server daemon (10.0.0.1:46172). Jul 14 21:21:03.693244 systemd-logind[1503]: Removed session 12. Jul 14 21:21:03.742346 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 46172 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:03.743619 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:03.748182 systemd-logind[1503]: New session 13 of user core. Jul 14 21:21:03.754896 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 14 21:21:03.869226 sshd[4085]: Connection closed by 10.0.0.1 port 46172 Jul 14 21:21:03.869566 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:03.872937 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:46172.service: Deactivated successfully. Jul 14 21:21:03.875153 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:21:03.876007 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. Jul 14 21:21:03.877313 systemd-logind[1503]: Removed session 13. Jul 14 21:21:08.896593 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:46188.service - OpenSSH per-connection server daemon (10.0.0.1:46188). Jul 14 21:21:08.965602 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 46188 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:08.967681 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:08.975607 systemd-logind[1503]: New session 14 of user core. Jul 14 21:21:08.985935 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 21:21:09.119825 sshd[4102]: Connection closed by 10.0.0.1 port 46188 Jul 14 21:21:09.121020 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:09.127071 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:46188.service: Deactivated successfully. Jul 14 21:21:09.129863 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:21:09.131166 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:21:09.132624 systemd-logind[1503]: Removed session 14. Jul 14 21:21:14.133932 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290). 
Jul 14 21:21:14.183724 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:14.186502 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:14.194324 systemd-logind[1503]: New session 15 of user core. Jul 14 21:21:14.213950 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 21:21:14.335079 sshd[4122]: Connection closed by 10.0.0.1 port 60290 Jul 14 21:21:14.335757 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:14.349939 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:60290.service: Deactivated successfully. Jul 14 21:21:14.353460 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 21:21:14.354389 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. Jul 14 21:21:14.360878 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:60296.service - OpenSSH per-connection server daemon (10.0.0.1:60296). Jul 14 21:21:14.361847 systemd-logind[1503]: Removed session 15. Jul 14 21:21:14.423910 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 60296 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:14.426015 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:14.430064 systemd-logind[1503]: New session 16 of user core. Jul 14 21:21:14.440892 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 21:21:14.651351 sshd[4138]: Connection closed by 10.0.0.1 port 60296 Jul 14 21:21:14.650621 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:14.659504 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:60296.service: Deactivated successfully. Jul 14 21:21:14.665065 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 21:21:14.667263 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. 
Jul 14 21:21:14.670276 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302). Jul 14 21:21:14.671459 systemd-logind[1503]: Removed session 16. Jul 14 21:21:14.726745 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:14.728087 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:14.732761 systemd-logind[1503]: New session 17 of user core. Jul 14 21:21:14.742111 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 21:21:15.300678 sshd[4152]: Connection closed by 10.0.0.1 port 60302 Jul 14 21:21:15.301291 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:15.314786 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:60302.service: Deactivated successfully. Jul 14 21:21:15.316299 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 21:21:15.320870 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit. Jul 14 21:21:15.327575 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:60316.service - OpenSSH per-connection server daemon (10.0.0.1:60316). Jul 14 21:21:15.329333 systemd-logind[1503]: Removed session 17. Jul 14 21:21:15.377484 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 60316 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:15.378570 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:15.382756 systemd-logind[1503]: New session 18 of user core. Jul 14 21:21:15.395855 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 14 21:21:15.609848 sshd[4175]: Connection closed by 10.0.0.1 port 60316 Jul 14 21:21:15.609943 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:15.619828 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:60316.service: Deactivated successfully. Jul 14 21:21:15.621797 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 21:21:15.622520 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit. Jul 14 21:21:15.625618 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:60318.service - OpenSSH per-connection server daemon (10.0.0.1:60318). Jul 14 21:21:15.626410 systemd-logind[1503]: Removed session 18. Jul 14 21:21:15.691759 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 60318 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:15.693037 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:15.699347 systemd-logind[1503]: New session 19 of user core. Jul 14 21:21:15.709839 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 21:21:15.822904 sshd[4192]: Connection closed by 10.0.0.1 port 60318 Jul 14 21:21:15.823375 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:15.826624 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:60318.service: Deactivated successfully. Jul 14 21:21:15.828244 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 21:21:15.828932 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit. Jul 14 21:21:15.829821 systemd-logind[1503]: Removed session 19. Jul 14 21:21:20.834653 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:60326.service - OpenSSH per-connection server daemon (10.0.0.1:60326). 
Jul 14 21:21:20.894046 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 60326 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:20.894812 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:20.899365 systemd-logind[1503]: New session 20 of user core. Jul 14 21:21:20.910138 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 21:21:21.035811 sshd[4213]: Connection closed by 10.0.0.1 port 60326 Jul 14 21:21:21.036112 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:21.039598 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit. Jul 14 21:21:21.039866 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:60326.service: Deactivated successfully. Jul 14 21:21:21.041384 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 21:21:21.042624 systemd-logind[1503]: Removed session 20. Jul 14 21:21:26.059220 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:42174.service - OpenSSH per-connection server daemon (10.0.0.1:42174). Jul 14 21:21:26.114486 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 42174 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:26.115635 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:26.119293 systemd-logind[1503]: New session 21 of user core. Jul 14 21:21:26.133878 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 21:21:26.243145 sshd[4230]: Connection closed by 10.0.0.1 port 42174 Jul 14 21:21:26.243477 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:26.246945 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:42174.service: Deactivated successfully. Jul 14 21:21:26.248532 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 21:21:26.249428 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit. 
Jul 14 21:21:26.250483 systemd-logind[1503]: Removed session 21. Jul 14 21:21:29.376592 kubelet[2658]: E0714 21:21:29.376491 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:31.258653 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:42186.service - OpenSSH per-connection server daemon (10.0.0.1:42186). Jul 14 21:21:31.314204 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 42186 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:31.315430 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:31.319428 systemd-logind[1503]: New session 22 of user core. Jul 14 21:21:31.332841 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 21:21:31.437596 sshd[4249]: Connection closed by 10.0.0.1 port 42186 Jul 14 21:21:31.438040 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jul 14 21:21:31.456686 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:42186.service: Deactivated successfully. Jul 14 21:21:31.459386 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 21:21:31.460848 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit. Jul 14 21:21:31.463143 systemd-logind[1503]: Removed session 22. Jul 14 21:21:31.464852 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:42194.service - OpenSSH per-connection server daemon (10.0.0.1:42194). Jul 14 21:21:31.528817 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 42194 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s Jul 14 21:21:31.529799 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:21:31.533592 systemd-logind[1503]: New session 23 of user core. Jul 14 21:21:31.543856 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 14 21:21:32.371727 kubelet[2658]: E0714 21:21:32.371571 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:21:33.163108 containerd[1527]: time="2025-07-14T21:21:33.163058349Z" level=info msg="StopContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" with timeout 30 (s)" Jul 14 21:21:33.166945 containerd[1527]: time="2025-07-14T21:21:33.166881342Z" level=info msg="Stop container \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" with signal terminated" Jul 14 21:21:33.192772 systemd[1]: cri-containerd-8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b.scope: Deactivated successfully. Jul 14 21:21:33.194396 containerd[1527]: time="2025-07-14T21:21:33.194354182Z" level=info msg="received exit event container_id:\"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" id:\"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" pid:3216 exited_at:{seconds:1752528093 nanos:193942814}" Jul 14 21:21:33.194481 containerd[1527]: time="2025-07-14T21:21:33.194391423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" id:\"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" pid:3216 exited_at:{seconds:1752528093 nanos:193942814}" Jul 14 21:21:33.207991 containerd[1527]: time="2025-07-14T21:21:33.207948520Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:21:33.212946 containerd[1527]: time="2025-07-14T21:21:33.212917854Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" id:\"f0457e4b0d178a36b0a3fc537755589cd2ea4f691d2dae221bf08c77d47195f6\" pid:4294 exited_at:{seconds:1752528093 nanos:212494166}" Jul 14 21:21:33.215682 containerd[1527]: time="2025-07-14T21:21:33.215658066Z" level=info msg="StopContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" with timeout 2 (s)" Jul 14 21:21:33.216260 containerd[1527]: time="2025-07-14T21:21:33.216179916Z" level=info msg="Stop container \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" with signal terminated" Jul 14 21:21:33.218768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b-rootfs.mount: Deactivated successfully. Jul 14 21:21:33.223945 systemd-networkd[1437]: lxc_health: Link DOWN Jul 14 21:21:33.223950 systemd-networkd[1437]: lxc_health: Lost carrier Jul 14 21:21:33.236406 systemd[1]: cri-containerd-ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66.scope: Deactivated successfully. Jul 14 21:21:33.236731 systemd[1]: cri-containerd-ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66.scope: Consumed 6.487s CPU time, 123.7M memory peak, 148K read from disk, 14.3M written to disk. 
Jul 14 21:21:33.239315 containerd[1527]: time="2025-07-14T21:21:33.239228353Z" level=info msg="received exit event container_id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" pid:3301 exited_at:{seconds:1752528093 nanos:238526219}" Jul 14 21:21:33.239917 containerd[1527]: time="2025-07-14T21:21:33.239867605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" id:\"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" pid:3301 exited_at:{seconds:1752528093 nanos:238526219}" Jul 14 21:21:33.258162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66-rootfs.mount: Deactivated successfully. Jul 14 21:21:33.263113 containerd[1527]: time="2025-07-14T21:21:33.263069524Z" level=info msg="StopContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" returns successfully" Jul 14 21:21:33.265290 containerd[1527]: time="2025-07-14T21:21:33.265257526Z" level=info msg="StopContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" returns successfully" Jul 14 21:21:33.265618 containerd[1527]: time="2025-07-14T21:21:33.265568132Z" level=info msg="StopPodSandbox for \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\"" Jul 14 21:21:33.266000 containerd[1527]: time="2025-07-14T21:21:33.265970579Z" level=info msg="StopPodSandbox for \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268553028Z" level=info msg="Container to stop \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268607909Z" level=info msg="Container to 
stop \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268617750Z" level=info msg="Container to stop \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268626270Z" level=info msg="Container to stop \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268634670Z" level=info msg="Container to stop \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.268769 containerd[1527]: time="2025-07-14T21:21:33.268563549Z" level=info msg="Container to stop \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:21:33.277646 systemd[1]: cri-containerd-1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907.scope: Deactivated successfully. Jul 14 21:21:33.279549 containerd[1527]: time="2025-07-14T21:21:33.279467875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" id:\"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" pid:2810 exit_status:137 exited_at:{seconds:1752528093 nanos:278913625}" Jul 14 21:21:33.284533 systemd[1]: cri-containerd-7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872.scope: Deactivated successfully. 
Jul 14 21:21:33.305641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907-rootfs.mount: Deactivated successfully. Jul 14 21:21:33.309274 containerd[1527]: time="2025-07-14T21:21:33.309220039Z" level=info msg="shim disconnected" id=1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907 namespace=k8s.io Jul 14 21:21:33.309963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872-rootfs.mount: Deactivated successfully. Jul 14 21:21:33.313946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907-shm.mount: Deactivated successfully. Jul 14 21:21:33.318256 containerd[1527]: time="2025-07-14T21:21:33.309252480Z" level=warning msg="cleaning up after shim disconnected" id=1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907 namespace=k8s.io Jul 14 21:21:33.318344 containerd[1527]: time="2025-07-14T21:21:33.318258210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:33.318380 containerd[1527]: time="2025-07-14T21:21:33.309486164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" id:\"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" pid:2883 exit_status:137 exited_at:{seconds:1752528093 nanos:285297226}" Jul 14 21:21:33.318490 containerd[1527]: time="2025-07-14T21:21:33.309536845Z" level=info msg="shim disconnected" id=7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872 namespace=k8s.io Jul 14 21:21:33.318531 containerd[1527]: time="2025-07-14T21:21:33.318488295Z" level=warning msg="cleaning up after shim disconnected" id=7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872 namespace=k8s.io Jul 14 21:21:33.318531 containerd[1527]: time="2025-07-14T21:21:33.318511735Z" level=info 
msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:21:33.318763 containerd[1527]: time="2025-07-14T21:21:33.311034433Z" level=info msg="received exit event sandbox_id:\"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" exit_status:137 exited_at:{seconds:1752528093 nanos:278913625}" Jul 14 21:21:33.318763 containerd[1527]: time="2025-07-14T21:21:33.318676978Z" level=info msg="TearDown network for sandbox \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" successfully" Jul 14 21:21:33.318763 containerd[1527]: time="2025-07-14T21:21:33.318719019Z" level=info msg="StopPodSandbox for \"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" returns successfully" Jul 14 21:21:33.319217 containerd[1527]: time="2025-07-14T21:21:33.311574844Z" level=info msg="TearDown network for sandbox \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" successfully" Jul 14 21:21:33.319217 containerd[1527]: time="2025-07-14T21:21:33.318919663Z" level=info msg="StopPodSandbox for \"1091ac9efecdb2c48fb1d05b5ffe8fc07c6645886d4320217ca54a498141e907\" returns successfully" Jul 14 21:21:33.319217 containerd[1527]: time="2025-07-14T21:21:33.318503055Z" level=info msg="received exit event sandbox_id:\"7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872\" exit_status:137 exited_at:{seconds:1752528093 nanos:285297226}" Jul 14 21:21:33.409072 kubelet[2658]: I0714 21:21:33.409038 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-hubble-tls\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409511 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzd4f\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-kube-api-access-mzd4f\") 
pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409545 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-bpf-maps\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409560 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-cgroup\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409574 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-hostproc\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409589 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-net\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.409844 kubelet[2658]: I0714 21:21:33.409606 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-etc-cni-netd\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409618 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cni-path\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409635 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-lib-modules\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409651 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-run\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409666 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-xtables-lock\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409686 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ff7662-b733-4065-8d73-dcd869390744-cilium-config-path\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410007 kubelet[2658]: I0714 21:21:33.409733 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2b4b1-083c-4dbf-9943-dc585b244cb8-cilium-config-path\") pod \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\" (UID: \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\") " Jul 14 21:21:33.410123 kubelet[2658]: I0714 21:21:33.409753 2658 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vkg6\" (UniqueName: \"kubernetes.io/projected/afa2b4b1-083c-4dbf-9943-dc585b244cb8-kube-api-access-2vkg6\") pod \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\" (UID: \"afa2b4b1-083c-4dbf-9943-dc585b244cb8\") " Jul 14 21:21:33.410123 kubelet[2658]: I0714 21:21:33.409771 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-kernel\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.410123 kubelet[2658]: I0714 21:21:33.409793 2658 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ff7662-b733-4065-8d73-dcd869390744-clustermesh-secrets\") pod \"c2ff7662-b733-4065-8d73-dcd869390744\" (UID: \"c2ff7662-b733-4065-8d73-dcd869390744\") " Jul 14 21:21:33.411020 kubelet[2658]: I0714 21:21:33.410734 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411020 kubelet[2658]: I0714 21:21:33.410736 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411020 kubelet[2658]: I0714 21:21:33.410772 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411020 kubelet[2658]: I0714 21:21:33.410797 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411020 kubelet[2658]: I0714 21:21:33.410786 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411172 kubelet[2658]: I0714 21:21:33.410813 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411172 kubelet[2658]: I0714 21:21:33.410822 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.411172 kubelet[2658]: I0714 21:21:33.410837 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.414473 kubelet[2658]: I0714 21:21:33.414299 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa2b4b1-083c-4dbf-9943-dc585b244cb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afa2b4b1-083c-4dbf-9943-dc585b244cb8" (UID: "afa2b4b1-083c-4dbf-9943-dc585b244cb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:21:33.414473 kubelet[2658]: I0714 21:21:33.414400 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.416486 kubelet[2658]: I0714 21:21:33.416445 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-kube-api-access-mzd4f" (OuterVolumeSpecName: "kube-api-access-mzd4f") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "kube-api-access-mzd4f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:21:33.417315 kubelet[2658]: I0714 21:21:33.417197 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2ff7662-b733-4065-8d73-dcd869390744-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:21:33.417315 kubelet[2658]: I0714 21:21:33.417258 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 21:21:33.417762 kubelet[2658]: I0714 21:21:33.417708 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ff7662-b733-4065-8d73-dcd869390744-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:21:33.417828 kubelet[2658]: I0714 21:21:33.417794 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2ff7662-b733-4065-8d73-dcd869390744" (UID: "c2ff7662-b733-4065-8d73-dcd869390744"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:21:33.418591 kubelet[2658]: I0714 21:21:33.418512 2658 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa2b4b1-083c-4dbf-9943-dc585b244cb8-kube-api-access-2vkg6" (OuterVolumeSpecName: "kube-api-access-2vkg6") pod "afa2b4b1-083c-4dbf-9943-dc585b244cb8" (UID: "afa2b4b1-083c-4dbf-9943-dc585b244cb8"). InnerVolumeSpecName "kube-api-access-2vkg6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:21:33.510093 kubelet[2658]: I0714 21:21:33.510053 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510093 kubelet[2658]: I0714 21:21:33.510083 2658 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510093 kubelet[2658]: I0714 21:21:33.510093 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ff7662-b733-4065-8d73-dcd869390744-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510093 kubelet[2658]: I0714 21:21:33.510102 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2b4b1-083c-4dbf-9943-dc585b244cb8-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510111 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vkg6\" (UniqueName: \"kubernetes.io/projected/afa2b4b1-083c-4dbf-9943-dc585b244cb8-kube-api-access-2vkg6\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510121 2658 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510129 2658 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ff7662-b733-4065-8d73-dcd869390744-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510138 2658 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510146 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzd4f\" (UniqueName: \"kubernetes.io/projected/c2ff7662-b733-4065-8d73-dcd869390744-kube-api-access-mzd4f\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510153 2658 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 21:21:33.510160 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510315 kubelet[2658]: I0714 
21:21:33.510167 2658 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510474 kubelet[2658]: I0714 21:21:33.510174 2658 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510474 kubelet[2658]: I0714 21:21:33.510182 2658 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510474 kubelet[2658]: I0714 21:21:33.510189 2658 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.510474 kubelet[2658]: I0714 21:21:33.510196 2658 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ff7662-b733-4065-8d73-dcd869390744-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 21:21:33.580568 kubelet[2658]: I0714 21:21:33.580205 2658 scope.go:117] "RemoveContainer" containerID="8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b" Jul 14 21:21:33.582814 containerd[1527]: time="2025-07-14T21:21:33.582766823Z" level=info msg="RemoveContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\"" Jul 14 21:21:33.588235 systemd[1]: Removed slice kubepods-besteffort-podafa2b4b1_083c_4dbf_9943_dc585b244cb8.slice - libcontainer container kubepods-besteffort-podafa2b4b1_083c_4dbf_9943_dc585b244cb8.slice. 
Jul 14 21:21:33.593335 containerd[1527]: time="2025-07-14T21:21:33.593207101Z" level=info msg="RemoveContainer for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" returns successfully" Jul 14 21:21:33.593492 kubelet[2658]: I0714 21:21:33.593461 2658 scope.go:117] "RemoveContainer" containerID="8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b" Jul 14 21:21:33.596887 systemd[1]: Removed slice kubepods-burstable-podc2ff7662_b733_4065_8d73_dcd869390744.slice - libcontainer container kubepods-burstable-podc2ff7662_b733_4065_8d73_dcd869390744.slice. Jul 14 21:21:33.596987 systemd[1]: kubepods-burstable-podc2ff7662_b733_4065_8d73_dcd869390744.slice: Consumed 6.630s CPU time, 124M memory peak, 152K read from disk, 14.3M written to disk. Jul 14 21:21:33.604322 containerd[1527]: time="2025-07-14T21:21:33.593669710Z" level=error msg="ContainerStatus for \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\": not found" Jul 14 21:21:33.605220 kubelet[2658]: E0714 21:21:33.605168 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\": not found" containerID="8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b" Jul 14 21:21:33.611385 kubelet[2658]: I0714 21:21:33.611279 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b"} err="failed to get container status \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f413e65152a9762fbfa9d1490351a07e66ea25f7600e03f8b6e5e2d1035ba1b\": not 
found" Jul 14 21:21:33.611385 kubelet[2658]: I0714 21:21:33.611388 2658 scope.go:117] "RemoveContainer" containerID="ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66" Jul 14 21:21:33.613548 containerd[1527]: time="2025-07-14T21:21:33.613521646Z" level=info msg="RemoveContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\"" Jul 14 21:21:33.617626 containerd[1527]: time="2025-07-14T21:21:33.617575563Z" level=info msg="RemoveContainer for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" returns successfully" Jul 14 21:21:33.617824 kubelet[2658]: I0714 21:21:33.617802 2658 scope.go:117] "RemoveContainer" containerID="2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9" Jul 14 21:21:33.619114 containerd[1527]: time="2025-07-14T21:21:33.619091872Z" level=info msg="RemoveContainer for \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\"" Jul 14 21:21:33.626295 containerd[1527]: time="2025-07-14T21:21:33.626266528Z" level=info msg="RemoveContainer for \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" returns successfully" Jul 14 21:21:33.626608 kubelet[2658]: I0714 21:21:33.626515 2658 scope.go:117] "RemoveContainer" containerID="fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401" Jul 14 21:21:33.628629 containerd[1527]: time="2025-07-14T21:21:33.628590812Z" level=info msg="RemoveContainer for \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\"" Jul 14 21:21:33.632130 containerd[1527]: time="2025-07-14T21:21:33.632095478Z" level=info msg="RemoveContainer for \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" returns successfully" Jul 14 21:21:33.632303 kubelet[2658]: I0714 21:21:33.632247 2658 scope.go:117] "RemoveContainer" containerID="425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b" Jul 14 21:21:33.633460 containerd[1527]: time="2025-07-14T21:21:33.633434624Z" level=info msg="RemoveContainer for 
\"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\"" Jul 14 21:21:33.636334 containerd[1527]: time="2025-07-14T21:21:33.636279078Z" level=info msg="RemoveContainer for \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" returns successfully" Jul 14 21:21:33.636535 kubelet[2658]: I0714 21:21:33.636447 2658 scope.go:117] "RemoveContainer" containerID="73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274" Jul 14 21:21:33.637778 containerd[1527]: time="2025-07-14T21:21:33.637754705Z" level=info msg="RemoveContainer for \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\"" Jul 14 21:21:33.640335 containerd[1527]: time="2025-07-14T21:21:33.640306154Z" level=info msg="RemoveContainer for \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" returns successfully" Jul 14 21:21:33.640504 kubelet[2658]: I0714 21:21:33.640482 2658 scope.go:117] "RemoveContainer" containerID="ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66" Jul 14 21:21:33.640771 containerd[1527]: time="2025-07-14T21:21:33.640738682Z" level=error msg="ContainerStatus for \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\": not found" Jul 14 21:21:33.640889 kubelet[2658]: E0714 21:21:33.640871 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\": not found" containerID="ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66" Jul 14 21:21:33.640973 kubelet[2658]: I0714 21:21:33.640952 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66"} err="failed to get 
container status \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea55129fe7bafc211273ff3b89e07d3ceee1f14102d517256f51516bc1090a66\": not found" Jul 14 21:21:33.641031 kubelet[2658]: I0714 21:21:33.641020 2658 scope.go:117] "RemoveContainer" containerID="2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9" Jul 14 21:21:33.641271 containerd[1527]: time="2025-07-14T21:21:33.641222011Z" level=error msg="ContainerStatus for \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\": not found" Jul 14 21:21:33.641364 kubelet[2658]: E0714 21:21:33.641344 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\": not found" containerID="2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9" Jul 14 21:21:33.641395 kubelet[2658]: I0714 21:21:33.641369 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9"} err="failed to get container status \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b5efb1edd7b1505427ad3665b115ddecf6d6823748a87b87e79e612a6e1abe9\": not found" Jul 14 21:21:33.641395 kubelet[2658]: I0714 21:21:33.641386 2658 scope.go:117] "RemoveContainer" containerID="fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401" Jul 14 21:21:33.641597 containerd[1527]: time="2025-07-14T21:21:33.641559698Z" level=error msg="ContainerStatus for 
\"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\": not found"
Jul 14 21:21:33.641703 kubelet[2658]: E0714 21:21:33.641675 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\": not found" containerID="fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401"
Jul 14 21:21:33.641731 kubelet[2658]: I0714 21:21:33.641712 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401"} err="failed to get container status \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa3f4b60b9d3747ea5ab857f558ed3c61178fbec91682c5bfda3eca487028401\": not found"
Jul 14 21:21:33.641761 kubelet[2658]: I0714 21:21:33.641730 2658 scope.go:117] "RemoveContainer" containerID="425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b"
Jul 14 21:21:33.641947 containerd[1527]: time="2025-07-14T21:21:33.641894304Z" level=error msg="ContainerStatus for \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\": not found"
Jul 14 21:21:33.642073 kubelet[2658]: E0714 21:21:33.642054 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\": not found" containerID="425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b"
Jul 14 21:21:33.642100 kubelet[2658]: I0714 21:21:33.642082 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b"} err="failed to get container status \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\": rpc error: code = NotFound desc = an error occurred when try to find container \"425ee489dabb6572bef5581cea05b9e27973c5eca93a9c2a97d32c951336037b\": not found"
Jul 14 21:21:33.642100 kubelet[2658]: I0714 21:21:33.642097 2658 scope.go:117] "RemoveContainer" containerID="73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274"
Jul 14 21:21:33.642346 containerd[1527]: time="2025-07-14T21:21:33.642305032Z" level=error msg="ContainerStatus for \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\": not found"
Jul 14 21:21:33.642472 kubelet[2658]: E0714 21:21:33.642451 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\": not found" containerID="73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274"
Jul 14 21:21:33.642555 kubelet[2658]: I0714 21:21:33.642475 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274"} err="failed to get container status \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\": rpc error: code = NotFound desc = an error occurred when try to find container \"73bb8ac3af3f6a18fa6566eba81380095a2a7d73490836e3b08a2eee316a4274\": not found"
Jul 14 21:21:34.218753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f7d9670463b975f737ed441ad133d07971784cbeb9772c6114e3ca5dbad4872-shm.mount: Deactivated successfully.
Jul 14 21:21:34.218855 systemd[1]: var-lib-kubelet-pods-afa2b4b1\x2d083c\x2d4dbf\x2d9943\x2ddc585b244cb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2vkg6.mount: Deactivated successfully.
Jul 14 21:21:34.218906 systemd[1]: var-lib-kubelet-pods-c2ff7662\x2db733\x2d4065\x2d8d73\x2ddcd869390744-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzd4f.mount: Deactivated successfully.
Jul 14 21:21:34.218957 systemd[1]: var-lib-kubelet-pods-c2ff7662\x2db733\x2d4065\x2d8d73\x2ddcd869390744-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 14 21:21:34.219002 systemd[1]: var-lib-kubelet-pods-c2ff7662\x2db733\x2d4065\x2d8d73\x2ddcd869390744-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 14 21:21:34.372419 kubelet[2658]: E0714 21:21:34.372388 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:34.375014 kubelet[2658]: I0714 21:21:34.374947 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa2b4b1-083c-4dbf-9943-dc585b244cb8" path="/var/lib/kubelet/pods/afa2b4b1-083c-4dbf-9943-dc585b244cb8/volumes"
Jul 14 21:21:34.375538 kubelet[2658]: I0714 21:21:34.375503 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ff7662-b733-4065-8d73-dcd869390744" path="/var/lib/kubelet/pods/c2ff7662-b733-4065-8d73-dcd869390744/volumes"
Jul 14 21:21:35.111495 sshd[4267]: Connection closed by 10.0.0.1 port 42194
Jul 14 21:21:35.112589 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:35.118750 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:42194.service: Deactivated successfully.
Jul 14 21:21:35.120282 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 21:21:35.121034 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
Jul 14 21:21:35.123387 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:48734.service - OpenSSH per-connection server daemon (10.0.0.1:48734).
Jul 14 21:21:35.123932 systemd-logind[1503]: Removed session 23.
Jul 14 21:21:35.177149 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 48734 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s
Jul 14 21:21:35.178208 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:21:35.181704 systemd-logind[1503]: New session 24 of user core.
Jul 14 21:21:35.187815 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 21:21:35.424305 kubelet[2658]: E0714 21:21:35.424177 2658 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:21:36.547689 sshd[4420]: Connection closed by 10.0.0.1 port 48734
Jul 14 21:21:36.548530 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:36.559502 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:48734.service: Deactivated successfully.
Jul 14 21:21:36.562353 kubelet[2658]: I0714 21:21:36.561278 2658 memory_manager.go:355] "RemoveStaleState removing state" podUID="c2ff7662-b733-4065-8d73-dcd869390744" containerName="cilium-agent"
Jul 14 21:21:36.562353 kubelet[2658]: I0714 21:21:36.561305 2658 memory_manager.go:355] "RemoveStaleState removing state" podUID="afa2b4b1-083c-4dbf-9943-dc585b244cb8" containerName="cilium-operator"
Jul 14 21:21:36.564348 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 21:21:36.564524 systemd[1]: session-24.scope: Consumed 1.269s CPU time, 26.1M memory peak.
Jul 14 21:21:36.567932 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit.
Jul 14 21:21:36.573169 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:48750.service - OpenSSH per-connection server daemon (10.0.0.1:48750).
Jul 14 21:21:36.576291 systemd-logind[1503]: Removed session 24.
Jul 14 21:21:36.588415 systemd[1]: Created slice kubepods-burstable-pod8111d472_afe5_47ed_89c3_4f59c24e863d.slice - libcontainer container kubepods-burstable-pod8111d472_afe5_47ed_89c3_4f59c24e863d.slice.
Jul 14 21:21:36.625601 kubelet[2658]: I0714 21:21:36.625542 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8111d472-afe5-47ed-89c3-4f59c24e863d-hubble-tls\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625601 kubelet[2658]: I0714 21:21:36.625585 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-cilium-run\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625601 kubelet[2658]: I0714 21:21:36.625605 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-etc-cni-netd\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625800 kubelet[2658]: I0714 21:21:36.625621 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-host-proc-sys-net\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625800 kubelet[2658]: I0714 21:21:36.625637 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-xtables-lock\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625800 kubelet[2658]: I0714 21:21:36.625655 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8111d472-afe5-47ed-89c3-4f59c24e863d-cilium-ipsec-secrets\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625800 kubelet[2658]: I0714 21:21:36.625672 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-bpf-maps\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625800 kubelet[2658]: I0714 21:21:36.625686 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8111d472-afe5-47ed-89c3-4f59c24e863d-clustermesh-secrets\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625719 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-host-proc-sys-kernel\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625736 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-cni-path\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625753 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s2h\" (UniqueName: \"kubernetes.io/projected/8111d472-afe5-47ed-89c3-4f59c24e863d-kube-api-access-s9s2h\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625770 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-cilium-cgroup\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625805 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-hostproc\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.625897 kubelet[2658]: I0714 21:21:36.625824 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8111d472-afe5-47ed-89c3-4f59c24e863d-lib-modules\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.626007 kubelet[2658]: I0714 21:21:36.625839 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8111d472-afe5-47ed-89c3-4f59c24e863d-cilium-config-path\") pod \"cilium-k52v7\" (UID: \"8111d472-afe5-47ed-89c3-4f59c24e863d\") " pod="kube-system/cilium-k52v7"
Jul 14 21:21:36.630825 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s
Jul 14 21:21:36.631923 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:21:36.636225 systemd-logind[1503]: New session 25 of user core.
Jul 14 21:21:36.645844 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 21:21:36.694661 sshd[4435]: Connection closed by 10.0.0.1 port 48750
Jul 14 21:21:36.694932 sshd-session[4432]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:36.708465 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:48750.service: Deactivated successfully.
Jul 14 21:21:36.710277 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 21:21:36.710965 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit.
Jul 14 21:21:36.714221 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:48754.service - OpenSSH per-connection server daemon (10.0.0.1:48754).
Jul 14 21:21:36.715174 systemd-logind[1503]: Removed session 25.
Jul 14 21:21:36.771707 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 48754 ssh2: RSA SHA256:2WhFb4hIV6asMtK/3oygiLWJK2wyIZMzeWonh0aJ84s
Jul 14 21:21:36.773277 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:21:36.777053 systemd-logind[1503]: New session 26 of user core.
Jul 14 21:21:36.783848 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 21:21:36.893284 kubelet[2658]: E0714 21:21:36.893140 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:36.893856 containerd[1527]: time="2025-07-14T21:21:36.893822727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k52v7,Uid:8111d472-afe5-47ed-89c3-4f59c24e863d,Namespace:kube-system,Attempt:0,}"
Jul 14 21:21:36.914224 containerd[1527]: time="2025-07-14T21:21:36.914169487Z" level=info msg="connecting to shim 897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" namespace=k8s.io protocol=ttrpc version=3
Jul 14 21:21:36.936873 systemd[1]: Started cri-containerd-897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b.scope - libcontainer container 897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b.
Jul 14 21:21:36.957218 containerd[1527]: time="2025-07-14T21:21:36.957175528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k52v7,Uid:8111d472-afe5-47ed-89c3-4f59c24e863d,Namespace:kube-system,Attempt:0,} returns sandbox id \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\""
Jul 14 21:21:36.957884 kubelet[2658]: E0714 21:21:36.957864 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:36.959523 containerd[1527]: time="2025-07-14T21:21:36.959490889Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 21:21:36.964649 containerd[1527]: time="2025-07-14T21:21:36.964613100Z" level=info msg="Container 7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:21:36.971515 containerd[1527]: time="2025-07-14T21:21:36.971467981Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\""
Jul 14 21:21:36.972073 containerd[1527]: time="2025-07-14T21:21:36.972041871Z" level=info msg="StartContainer for \"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\""
Jul 14 21:21:36.972841 containerd[1527]: time="2025-07-14T21:21:36.972806765Z" level=info msg="connecting to shim 7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" protocol=ttrpc version=3
Jul 14 21:21:36.994863 systemd[1]: Started cri-containerd-7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176.scope - libcontainer container 7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176.
Jul 14 21:21:37.017896 containerd[1527]: time="2025-07-14T21:21:37.017862235Z" level=info msg="StartContainer for \"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\" returns successfully"
Jul 14 21:21:37.032257 systemd[1]: cri-containerd-7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176.scope: Deactivated successfully.
Jul 14 21:21:37.034332 containerd[1527]: time="2025-07-14T21:21:37.034286279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\" id:\"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\" pid:4514 exited_at:{seconds:1752528097 nanos:33628148}"
Jul 14 21:21:37.034641 containerd[1527]: time="2025-07-14T21:21:37.034606605Z" level=info msg="received exit event container_id:\"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\" id:\"7155497a9ac9dce6ed9b05855ce73b0a792865b6cc4fb077f71493e775fe8176\" pid:4514 exited_at:{seconds:1752528097 nanos:33628148}"
Jul 14 21:21:37.601290 kubelet[2658]: E0714 21:21:37.601085 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:37.614371 containerd[1527]: time="2025-07-14T21:21:37.614291793Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:21:37.627287 containerd[1527]: time="2025-07-14T21:21:37.627241177Z" level=info msg="Container 5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:21:37.632600 containerd[1527]: time="2025-07-14T21:21:37.632148342Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\""
Jul 14 21:21:37.632834 containerd[1527]: time="2025-07-14T21:21:37.632804714Z" level=info msg="StartContainer for \"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\""
Jul 14 21:21:37.633531 containerd[1527]: time="2025-07-14T21:21:37.633501806Z" level=info msg="connecting to shim 5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" protocol=ttrpc version=3
Jul 14 21:21:37.648912 systemd[1]: Started cri-containerd-5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1.scope - libcontainer container 5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1.
Jul 14 21:21:37.670873 containerd[1527]: time="2025-07-14T21:21:37.670842452Z" level=info msg="StartContainer for \"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\" returns successfully"
Jul 14 21:21:37.681692 systemd[1]: cri-containerd-5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1.scope: Deactivated successfully.
Jul 14 21:21:37.682858 containerd[1527]: time="2025-07-14T21:21:37.682826619Z" level=info msg="received exit event container_id:\"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\" id:\"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\" pid:4561 exited_at:{seconds:1752528097 nanos:682423492}"
Jul 14 21:21:37.682945 containerd[1527]: time="2025-07-14T21:21:37.682918461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\" id:\"5c80a2d0b20d7f918450c3f310bae29bd17110ff370b28ae5f8ac23ad25f2ae1\" pid:4561 exited_at:{seconds:1752528097 nanos:682423492}"
Jul 14 21:21:38.605961 kubelet[2658]: E0714 21:21:38.605930 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:38.608753 containerd[1527]: time="2025-07-14T21:21:38.608564243Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:21:38.617878 containerd[1527]: time="2025-07-14T21:21:38.617829199Z" level=info msg="Container bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:21:38.626218 containerd[1527]: time="2025-07-14T21:21:38.626183901Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\""
Jul 14 21:21:38.626646 containerd[1527]: time="2025-07-14T21:21:38.626622228Z" level=info msg="StartContainer for \"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\""
Jul 14 21:21:38.627938 containerd[1527]: time="2025-07-14T21:21:38.627913290Z" level=info msg="connecting to shim bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" protocol=ttrpc version=3
Jul 14 21:21:38.650928 systemd[1]: Started cri-containerd-bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284.scope - libcontainer container bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284.
Jul 14 21:21:38.678854 systemd[1]: cri-containerd-bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284.scope: Deactivated successfully.
Jul 14 21:21:38.681898 containerd[1527]: time="2025-07-14T21:21:38.681870523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\" id:\"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\" pid:4606 exited_at:{seconds:1752528098 nanos:681676160}"
Jul 14 21:21:38.681968 containerd[1527]: time="2025-07-14T21:21:38.681904323Z" level=info msg="received exit event container_id:\"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\" id:\"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\" pid:4606 exited_at:{seconds:1752528098 nanos:681676160}"
Jul 14 21:21:38.683087 containerd[1527]: time="2025-07-14T21:21:38.683061503Z" level=info msg="StartContainer for \"bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284\" returns successfully"
Jul 14 21:21:38.700146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb1147a7330975f72c03b2a4c000f3d1abc6c48864292fcaab32b9153e4aa284-rootfs.mount: Deactivated successfully.
Jul 14 21:21:39.611180 kubelet[2658]: E0714 21:21:39.611153 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:39.613971 containerd[1527]: time="2025-07-14T21:21:39.613932866Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:21:39.624613 containerd[1527]: time="2025-07-14T21:21:39.624025833Z" level=info msg="Container 21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:21:39.633175 containerd[1527]: time="2025-07-14T21:21:39.633049183Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\""
Jul 14 21:21:39.633680 containerd[1527]: time="2025-07-14T21:21:39.633623592Z" level=info msg="StartContainer for \"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\""
Jul 14 21:21:39.634444 containerd[1527]: time="2025-07-14T21:21:39.634354284Z" level=info msg="connecting to shim 21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" protocol=ttrpc version=3
Jul 14 21:21:39.652921 systemd[1]: Started cri-containerd-21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a.scope - libcontainer container 21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a.
Jul 14 21:21:39.672947 systemd[1]: cri-containerd-21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a.scope: Deactivated successfully.
Jul 14 21:21:39.676432 containerd[1527]: time="2025-07-14T21:21:39.676121576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\" id:\"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\" pid:4645 exited_at:{seconds:1752528099 nanos:673204607}"
Jul 14 21:21:39.676432 containerd[1527]: time="2025-07-14T21:21:39.676274338Z" level=info msg="received exit event container_id:\"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\" id:\"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\" pid:4645 exited_at:{seconds:1752528099 nanos:673204607}"
Jul 14 21:21:39.683406 containerd[1527]: time="2025-07-14T21:21:39.683379176Z" level=info msg="StartContainer for \"21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a\" returns successfully"
Jul 14 21:21:39.695802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a-rootfs.mount: Deactivated successfully.
Jul 14 21:21:39.697655 containerd[1527]: time="2025-07-14T21:21:39.681163739Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8111d472_afe5_47ed_89c3_4f59c24e863d.slice/cri-containerd-21982965b296bb7045f82c438f9408d6ae0dd8fc22c8dc1051371cccd4db9a6a.scope/memory.events\": no such file or directory"
Jul 14 21:21:40.425160 kubelet[2658]: E0714 21:21:40.425115 2658 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:21:40.617159 kubelet[2658]: E0714 21:21:40.617061 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:40.623724 containerd[1527]: time="2025-07-14T21:21:40.623236148Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:21:40.635559 containerd[1527]: time="2025-07-14T21:21:40.635527427Z" level=info msg="Container fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9: CDI devices from CRI Config.CDIDevices: []"
Jul 14 21:21:40.637251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397317300.mount: Deactivated successfully.
Jul 14 21:21:40.643792 containerd[1527]: time="2025-07-14T21:21:40.643728160Z" level=info msg="CreateContainer within sandbox \"897d7d7cd47b3ee9ec8d8be4d8a677400704ddcb89b1eb436a65b747aee7856b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\""
Jul 14 21:21:40.644407 containerd[1527]: time="2025-07-14T21:21:40.644382170Z" level=info msg="StartContainer for \"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\""
Jul 14 21:21:40.646653 containerd[1527]: time="2025-07-14T21:21:40.646605046Z" level=info msg="connecting to shim fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9" address="unix:///run/containerd/s/b26240f691082a3ecc078ec1b69c394817059e0e0482b6222ea103abd18c1a3d" protocol=ttrpc version=3
Jul 14 21:21:40.664855 systemd[1]: Started cri-containerd-fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9.scope - libcontainer container fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9.
Jul 14 21:21:40.699295 containerd[1527]: time="2025-07-14T21:21:40.699084216Z" level=info msg="StartContainer for \"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" returns successfully"
Jul 14 21:21:40.754119 containerd[1527]: time="2025-07-14T21:21:40.754074226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" id:\"406a1fea50e34945ac3ebf560533ad794f8658aecef6d17d09eca10716d4abb7\" pid:4714 exited_at:{seconds:1752528100 nanos:753836262}"
Jul 14 21:21:40.964750 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 14 21:21:41.628432 kubelet[2658]: E0714 21:21:41.628362 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:41.642431 kubelet[2658]: I0714 21:21:41.642088 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k52v7" podStartSLOduration=5.642073142 podStartE2EDuration="5.642073142s" podCreationTimestamp="2025-07-14 21:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:21:41.641410491 +0000 UTC m=+81.367207547" watchObservedRunningTime="2025-07-14 21:21:41.642073142 +0000 UTC m=+81.367870198"
Jul 14 21:21:42.064656 kubelet[2658]: I0714 21:21:42.064595 2658 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:21:42Z","lastTransitionTime":"2025-07-14T21:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:21:42.893665 kubelet[2658]: E0714 21:21:42.893625 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:43.112567 containerd[1527]: time="2025-07-14T21:21:43.112491273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" id:\"c1715e07994702c9c450c59aceea32d0d8082a16945516519c0007e2c057a8be\" pid:5027 exit_status:1 exited_at:{seconds:1752528103 nanos:112249230}"
Jul 14 21:21:43.752847 systemd-networkd[1437]: lxc_health: Link UP
Jul 14 21:21:43.753096 systemd-networkd[1437]: lxc_health: Gained carrier
Jul 14 21:21:44.895092 kubelet[2658]: E0714 21:21:44.895047 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:45.267076 containerd[1527]: time="2025-07-14T21:21:45.266845504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" id:\"e68f3b833c012245facce4f18dc776a1b26d35bc2589745f1dcdb24087fe90a0\" pid:5252 exited_at:{seconds:1752528105 nanos:266508699}"
Jul 14 21:21:45.639497 kubelet[2658]: E0714 21:21:45.637800 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:45.677950 systemd-networkd[1437]: lxc_health: Gained IPv6LL
Jul 14 21:21:46.639835 kubelet[2658]: E0714 21:21:46.639804 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:47.371944 kubelet[2658]: E0714 21:21:47.371908 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:21:47.375801 containerd[1527]: time="2025-07-14T21:21:47.375382736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" id:\"d72ea9cc173439419cf6868ce2c178f0123aa30ce8de24d711a63ea5222b4dca\" pid:5285 exited_at:{seconds:1752528107 nanos:374507764}"
Jul 14 21:21:49.471951 containerd[1527]: time="2025-07-14T21:21:49.471911032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1ce727cb8f15f55eb66a5a774dcdf87059c76196a62ae288a8e84d1f688aa9\" id:\"5130d9b7ce952130119ff3e3ad2d3c8dc16e9f84d6d689d9714b77a227fca627\" pid:5308 exited_at:{seconds:1752528109 nanos:471469866}"
Jul 14 21:21:49.474448 kubelet[2658]: E0714 21:21:49.474350 2658 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40354->127.0.0.1:40457: write tcp 127.0.0.1:40354->127.0.0.1:40457: write: broken pipe
Jul 14 21:21:49.476588 sshd[4449]: Connection closed by 10.0.0.1 port 48754
Jul 14 21:21:49.477036 sshd-session[4442]: pam_unix(sshd:session): session closed for user core
Jul 14 21:21:49.481122 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:48754.service: Deactivated successfully.
Jul 14 21:21:49.482673 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 21:21:49.483322 systemd-logind[1503]: Session 26 logged out. Waiting for processes to exit.
Jul 14 21:21:49.484227 systemd-logind[1503]: Removed session 26.