Jul 10 00:29:35.965835 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:29:35.965857 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:29:35.965867 kernel: KASLR enabled
Jul 10 00:29:35.965873 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:29:35.965878 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:29:35.965884 kernel: random: crng init done
Jul 10 00:29:35.965891 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:29:35.965897 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:29:35.965904 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:29:35.965911 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965917 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965923 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965930 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965936 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965944 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965951 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965958 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965964 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:29:35.965971 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:29:35.965977 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:29:35.965984 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:29:35.965990 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 10 00:29:35.965996 kernel: Zone ranges:
Jul 10 00:29:35.966003 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:29:35.966009 kernel: DMA32 empty
Jul 10 00:29:35.966016 kernel: Normal empty
Jul 10 00:29:35.966023 kernel: Movable zone start for each node
Jul 10 00:29:35.966029 kernel: Early memory node ranges
Jul 10 00:29:35.966036 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:29:35.966042 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:29:35.966049 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:29:35.966055 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:29:35.966062 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:29:35.966068 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:29:35.966074 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:29:35.966081 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:29:35.966087 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:29:35.966095 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:29:35.966102 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:29:35.966108 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:29:35.966117 kernel: psci: Trusted OS migration not required
Jul 10 00:29:35.966124 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:29:35.966131 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:29:35.966140 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:29:35.966146 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:29:35.966153 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:29:35.966160 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:29:35.966167 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:29:35.966174 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:29:35.966180 kernel: CPU features: detected: Spectre-v4
Jul 10 00:29:35.966187 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:29:35.966194 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:29:35.966201 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:29:35.966209 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:29:35.966216 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:29:35.966222 kernel: alternatives: applying boot alternatives
Jul 10 00:29:35.966230 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:29:35.966237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:29:35.966244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:29:35.966251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:29:35.966257 kernel: Fallback order for Node 0: 0
Jul 10 00:29:35.966264 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:29:35.966271 kernel: Policy zone: DMA
Jul 10 00:29:35.966277 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:29:35.966286 kernel: software IO TLB: area num 4.
Jul 10 00:29:35.966293 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:29:35.966300 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 10 00:29:35.966307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:29:35.966313 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:29:35.966321 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:29:35.966328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:29:35.966334 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:29:35.966341 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:29:35.966348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:29:35.966355 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:29:35.966362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:29:35.966370 kernel: GICv3: 256 SPIs implemented
Jul 10 00:29:35.966376 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:29:35.966383 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:29:35.966390 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:29:35.966406 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:29:35.966413 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:29:35.966420 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:29:35.966427 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:29:35.966434 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:29:35.966441 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:29:35.966458 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:29:35.966468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:29:35.966475 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:29:35.966483 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:29:35.966490 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:29:35.966497 kernel: arm-pv: using stolen time PV
Jul 10 00:29:35.966505 kernel: Console: colour dummy device 80x25
Jul 10 00:29:35.966512 kernel: ACPI: Core revision 20230628
Jul 10 00:29:35.966519 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:29:35.966526 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:29:35.966533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:29:35.966542 kernel: landlock: Up and running.
Jul 10 00:29:35.966549 kernel: SELinux: Initializing.
Jul 10 00:29:35.966556 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:29:35.966563 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:29:35.966570 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:29:35.966578 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:29:35.966585 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:29:35.966592 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:29:35.966599 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:29:35.966607 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:29:35.966614 kernel: Remapping and enabling EFI services.
Jul 10 00:29:35.966621 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:29:35.966628 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:29:35.966635 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:29:35.966642 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:29:35.966649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:29:35.966656 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:29:35.966663 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:29:35.966670 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:29:35.966679 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:29:35.966686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:29:35.966698 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:29:35.966706 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:29:35.966713 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:29:35.966721 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:29:35.966728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:29:35.966735 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:29:35.966742 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:29:35.966751 kernel: SMP: Total of 4 processors activated.
Jul 10 00:29:35.966758 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:29:35.966765 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:29:35.966773 kernel: CPU features: detected: Common not Private translations
Jul 10 00:29:35.966780 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:29:35.966787 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:29:35.966795 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:29:35.966802 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:29:35.966811 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:29:35.966818 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:29:35.966825 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:29:35.966833 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:29:35.966840 kernel: alternatives: applying system-wide alternatives
Jul 10 00:29:35.966847 kernel: devtmpfs: initialized
Jul 10 00:29:35.966855 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:29:35.966862 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:29:35.966870 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:29:35.966878 kernel: SMBIOS 3.0.0 present.
Jul 10 00:29:35.966886 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:29:35.966893 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:29:35.966900 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:29:35.966908 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:29:35.966915 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:29:35.966923 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:29:35.966930 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 10 00:29:35.966937 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:29:35.966946 kernel: cpuidle: using governor menu
Jul 10 00:29:35.966954 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:29:35.966961 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:29:35.966969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:29:35.966976 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:29:35.966983 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:29:35.966991 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:29:35.967002 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:29:35.967010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:29:35.967019 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:29:35.967026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:29:35.967034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:29:35.967041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:29:35.967048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:29:35.967055 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:29:35.967063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:29:35.967070 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:29:35.967077 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:29:35.967086 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:29:35.967093 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:29:35.967101 kernel: ACPI: Interpreter enabled
Jul 10 00:29:35.967108 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:29:35.967115 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:29:35.967122 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:29:35.967130 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:29:35.967137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:29:35.967286 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:29:35.967424 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:29:35.967516 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:29:35.967586 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:29:35.967651 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:29:35.967661 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:29:35.967669 kernel: PCI host bridge to bus 0000:00
Jul 10 00:29:35.967741 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:29:35.967807 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:29:35.967865 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:29:35.967924 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:29:35.968011 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:29:35.968087 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:29:35.968155 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:29:35.968226 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:29:35.968294 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:29:35.968361 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:29:35.968438 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:29:35.968540 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:29:35.968601 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:29:35.968661 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:29:35.968723 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:29:35.968733 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:29:35.968741 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:29:35.968748 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:29:35.968756 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:29:35.968763 kernel: iommu: Default domain type: Translated
Jul 10 00:29:35.968770 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:29:35.968778 kernel: efivars: Registered efivars operations
Jul 10 00:29:35.968785 kernel: vgaarb: loaded
Jul 10 00:29:35.968795 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:29:35.968802 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:29:35.968810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:29:35.968817 kernel: pnp: PnP ACPI init
Jul 10 00:29:35.968890 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:29:35.968901 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:29:35.968908 kernel: NET: Registered PF_INET protocol family
Jul 10 00:29:35.968916 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:29:35.968926 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:29:35.968933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:29:35.968941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:29:35.968948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:29:35.968956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:29:35.968963 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:29:35.968971 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:29:35.968978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:29:35.968986 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:29:35.968995 kernel: kvm [1]: HYP mode not available
Jul 10 00:29:35.969002 kernel: Initialise system trusted keyrings
Jul 10 00:29:35.969010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:29:35.969017 kernel: Key type asymmetric registered
Jul 10 00:29:35.969025 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:29:35.969033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:29:35.969040 kernel: io scheduler mq-deadline registered
Jul 10 00:29:35.969048 kernel: io scheduler kyber registered
Jul 10 00:29:35.969055 kernel: io scheduler bfq registered
Jul 10 00:29:35.969065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:29:35.969072 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:29:35.969080 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:29:35.969146 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:29:35.969157 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:29:35.969164 kernel: thunder_xcv, ver 1.0
Jul 10 00:29:35.969172 kernel: thunder_bgx, ver 1.0
Jul 10 00:29:35.969179 kernel: nicpf, ver 1.0
Jul 10 00:29:35.969187 kernel: nicvf, ver 1.0
Jul 10 00:29:35.969263 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:29:35.969329 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:29:35 UTC (1752107375)
Jul 10 00:29:35.969339 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:29:35.969346 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:29:35.969366 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:29:35.969374 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:29:35.969382 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:29:35.969391 kernel: Segment Routing with IPv6
Jul 10 00:29:35.969434 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:29:35.969468 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:29:35.969480 kernel: Key type dns_resolver registered
Jul 10 00:29:35.969487 kernel: registered taskstats version 1
Jul 10 00:29:35.969495 kernel: Loading compiled-in X.509 certificates
Jul 10 00:29:35.969503 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:29:35.969511 kernel: Key type .fscrypt registered
Jul 10 00:29:35.969520 kernel: Key type fscrypt-provisioning registered
Jul 10 00:29:35.969527 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:29:35.969538 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:29:35.969546 kernel: ima: No architecture policies found
Jul 10 00:29:35.969554 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:29:35.969562 kernel: clk: Disabling unused clocks
Jul 10 00:29:35.969570 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:29:35.969577 kernel: Run /init as init process
Jul 10 00:29:35.969585 kernel: with arguments:
Jul 10 00:29:35.969593 kernel: /init
Jul 10 00:29:35.969600 kernel: with environment:
Jul 10 00:29:35.969609 kernel: HOME=/
Jul 10 00:29:35.969617 kernel: TERM=linux
Jul 10 00:29:35.969624 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:29:35.969634 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:29:35.969643 systemd[1]: Detected virtualization kvm.
Jul 10 00:29:35.969651 systemd[1]: Detected architecture arm64.
Jul 10 00:29:35.969659 systemd[1]: Running in initrd.
Jul 10 00:29:35.969668 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:29:35.969675 systemd[1]: Hostname set to .
Jul 10 00:29:35.969684 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:29:35.969691 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:29:35.969699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:29:35.969707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:29:35.969716 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:29:35.969724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:29:35.969734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:29:35.969742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:29:35.969752 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:29:35.969760 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:29:35.969768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:29:35.969776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:29:35.969784 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:29:35.969794 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:29:35.969802 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:29:35.969809 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:29:35.969817 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:29:35.969825 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:29:35.969833 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:29:35.969842 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:29:35.969850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:29:35.969859 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:29:35.969868 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:29:35.969877 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:29:35.969885 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:29:35.969893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:29:35.969900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:29:35.969909 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:29:35.969917 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:29:35.969925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:29:35.969935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:29:35.969944 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:29:35.969952 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:29:35.969960 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:29:35.969969 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:29:35.969979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:29:35.969988 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:29:35.970016 systemd-journald[237]: Collecting audit messages is disabled.
Jul 10 00:29:35.970036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:29:35.970046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:29:35.970054 kernel: Bridge firewalling registered
Jul 10 00:29:35.970063 systemd-journald[237]: Journal started
Jul 10 00:29:35.970081 systemd-journald[237]: Runtime Journal (/run/log/journal/bd9b9b43b0ec4ba6a81c7d992ad06438) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:29:35.950075 systemd-modules-load[239]: Inserted module 'overlay'
Jul 10 00:29:35.967927 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 10 00:29:35.973064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:29:35.978010 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:29:35.978441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:29:35.979443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:29:35.981889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:29:35.992605 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:29:35.994121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:29:35.996350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:29:36.004718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:29:36.006219 dracut-cmdline[266]: dracut-dracut-053
Jul 10 00:29:36.005710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:29:36.009501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:29:36.013290 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:29:36.044726 systemd-resolved[287]: Positive Trust Anchors:
Jul 10 00:29:36.044743 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:29:36.044775 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:29:36.049491 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 10 00:29:36.050480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:29:36.051644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:29:36.086494 kernel: SCSI subsystem initialized
Jul 10 00:29:36.090464 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:29:36.098506 kernel: iscsi: registered transport (tcp)
Jul 10 00:29:36.111719 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:29:36.111799 kernel: QLogic iSCSI HBA Driver
Jul 10 00:29:36.155496 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:29:36.166602 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:29:36.183691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:29:36.184706 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:29:36.184717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:29:36.232473 kernel: raid6: neonx8 gen() 15777 MB/s
Jul 10 00:29:36.249473 kernel: raid6: neonx4 gen() 15666 MB/s
Jul 10 00:29:36.266476 kernel: raid6: neonx2 gen() 13190 MB/s
Jul 10 00:29:36.283479 kernel: raid6: neonx1 gen() 10479 MB/s
Jul 10 00:29:36.300496 kernel: raid6: int64x8 gen() 6959 MB/s
Jul 10 00:29:36.317477 kernel: raid6: int64x4 gen() 7347 MB/s
Jul 10 00:29:36.334515 kernel: raid6: int64x2 gen() 6128 MB/s
Jul 10 00:29:36.351494 kernel: raid6: int64x1 gen() 5047 MB/s
Jul 10 00:29:36.351573 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Jul 10 00:29:36.368498 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Jul 10 00:29:36.368558 kernel: raid6: using neon recovery algorithm
Jul 10 00:29:36.373469 kernel: xor: measuring software checksum speed
Jul 10 00:29:36.374500 kernel: 8regs : 17286 MB/sec
Jul 10 00:29:36.374523 kernel: 32regs : 19679 MB/sec
Jul 10 00:29:36.375467 kernel: arm64_neon : 27079 MB/sec
Jul 10 00:29:36.375487 kernel: xor: using function: arm64_neon (27079 MB/sec)
Jul 10 00:29:36.426489 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:29:36.438772 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:29:36.456679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:29:36.468892 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jul 10 00:29:36.472105 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:29:36.486890 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:29:36.498604 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 10 00:29:36.529515 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:29:36.540688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:29:36.584622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:29:36.592773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:29:36.610651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:29:36.612404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:29:36.613877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:29:36.616158 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:29:36.624607 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:29:36.635761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:29:36.638466 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 10 00:29:36.641640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:29:36.642695 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:29:36.642808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:29:36.645714 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:29:36.646471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:29:36.652123 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:29:36.652165 kernel: GPT:9289727 != 19775487 Jul 10 00:29:36.652177 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:29:36.652187 kernel: GPT:9289727 != 19775487 Jul 10 00:29:36.652212 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 10 00:29:36.652229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:29:36.646601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:29:36.651845 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:29:36.658758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:29:36.665485 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (503) Jul 10 00:29:36.668493 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (517) Jul 10 00:29:36.675865 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 00:29:36.676985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:29:36.685049 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:29:36.686097 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 00:29:36.691692 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:29:36.696332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:29:36.713713 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:29:36.715851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:29:36.721462 disk-uuid[551]: Primary Header is updated. Jul 10 00:29:36.721462 disk-uuid[551]: Secondary Entries is updated. Jul 10 00:29:36.721462 disk-uuid[551]: Secondary Header is updated. 
Jul 10 00:29:36.728486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:29:36.738828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:29:37.737473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:29:37.737976 disk-uuid[553]: The operation has completed successfully. Jul 10 00:29:37.760872 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:29:37.760967 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:29:37.778670 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:29:37.781846 sh[576]: Success Jul 10 00:29:37.799524 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 00:29:37.840932 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:29:37.842652 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:29:37.844255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:29:37.858097 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32 Jul 10 00:29:37.858148 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:29:37.858160 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 10 00:29:37.859469 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 10 00:29:37.859496 kernel: BTRFS info (device dm-0): using free space tree Jul 10 00:29:37.868371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:29:37.869368 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:29:37.870113 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 10 00:29:37.872582 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:29:37.883765 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0 Jul 10 00:29:37.883820 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:29:37.883832 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:29:37.886481 kernel: BTRFS info (device vda6): auto enabling async discard Jul 10 00:29:37.897015 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:29:37.899814 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0 Jul 10 00:29:37.906317 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:29:37.911677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:29:37.975890 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:29:37.989650 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:29:38.016629 systemd-networkd[765]: lo: Link UP Jul 10 00:29:38.016637 systemd-networkd[765]: lo: Gained carrier Jul 10 00:29:38.017625 systemd-networkd[765]: Enumeration completed Jul 10 00:29:38.017718 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:29:38.018322 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:29:38.018326 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:29:38.018678 systemd[1]: Reached target network.target - Network. 
Jul 10 00:29:38.019528 systemd-networkd[765]: eth0: Link UP Jul 10 00:29:38.019531 systemd-networkd[765]: eth0: Gained carrier Jul 10 00:29:38.019538 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:29:38.030485 ignition[681]: Ignition 2.19.0 Jul 10 00:29:38.030491 ignition[681]: Stage: fetch-offline Jul 10 00:29:38.030526 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:29:38.030533 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:29:38.030738 ignition[681]: parsed url from cmdline: "" Jul 10 00:29:38.030742 ignition[681]: no config URL provided Jul 10 00:29:38.030747 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:29:38.030753 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:29:38.030777 ignition[681]: op(1): [started] loading QEMU firmware config module Jul 10 00:29:38.030782 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:29:38.041505 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:29:38.043822 ignition[681]: op(1): [finished] loading QEMU firmware config module Jul 10 00:29:38.084593 ignition[681]: parsing config with SHA512: 26c6f2b7963527d6100c00a36698d58fc9c0bb1240d77aef39129c2f189e5e2bbadd0608645fb47ee193f3d54a2c7d74ecf22a6266488ef767d9d02ebdc4b4ae Jul 10 00:29:38.090258 unknown[681]: fetched base config from "system" Jul 10 00:29:38.091178 unknown[681]: fetched user config from "qemu" Jul 10 00:29:38.091688 ignition[681]: fetch-offline: fetch-offline passed Jul 10 00:29:38.091755 ignition[681]: Ignition finished successfully Jul 10 00:29:38.093227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 10 00:29:38.094653 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:29:38.104630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:29:38.114732 ignition[773]: Ignition 2.19.0 Jul 10 00:29:38.114745 ignition[773]: Stage: kargs Jul 10 00:29:38.114924 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:29:38.114934 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:29:38.115816 ignition[773]: kargs: kargs passed Jul 10 00:29:38.115861 ignition[773]: Ignition finished successfully Jul 10 00:29:38.117946 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:29:38.128601 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:29:38.138311 ignition[781]: Ignition 2.19.0 Jul 10 00:29:38.138321 ignition[781]: Stage: disks Jul 10 00:29:38.138533 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:29:38.138544 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:29:38.139384 ignition[781]: disks: disks passed Jul 10 00:29:38.140850 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:29:38.139439 ignition[781]: Ignition finished successfully Jul 10 00:29:38.142182 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:29:38.143300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:29:38.144577 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:29:38.145889 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:29:38.147345 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:29:38.158634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 10 00:29:38.173048 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 10 00:29:38.176715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:29:38.184537 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:29:38.231476 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none. Jul 10 00:29:38.231966 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:29:38.233059 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:29:38.244542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:29:38.246505 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:29:38.247293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:29:38.247336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:29:38.247360 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:29:38.252710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:29:38.254195 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:29:38.259483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799) Jul 10 00:29:38.262471 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0 Jul 10 00:29:38.262510 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:29:38.262522 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:29:38.265469 kernel: BTRFS info (device vda6): auto enabling async discard Jul 10 00:29:38.266606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:29:38.305938 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:29:38.309789 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:29:38.313592 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:29:38.317125 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:29:38.392204 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:29:38.408654 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:29:38.410015 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:29:38.414549 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0 Jul 10 00:29:38.430624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:29:38.433081 ignition[913]: INFO : Ignition 2.19.0 Jul 10 00:29:38.433081 ignition[913]: INFO : Stage: mount Jul 10 00:29:38.434232 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:29:38.434232 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:29:38.434232 ignition[913]: INFO : mount: mount passed Jul 10 00:29:38.434232 ignition[913]: INFO : Ignition finished successfully Jul 10 00:29:38.436683 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:29:38.449552 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:29:38.854276 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:29:38.865628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 10 00:29:38.871471 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926)
Jul 10 00:29:38.873804 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:29:38.873836 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:29:38.873847 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:29:38.875467 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:29:38.876952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:29:38.894429 ignition[943]: INFO : Ignition 2.19.0
Jul 10 00:29:38.894429 ignition[943]: INFO : Stage: files
Jul 10 00:29:38.895745 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:29:38.895745 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:29:38.895745 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:29:38.898299 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:29:38.898299 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:29:38.901584 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:29:38.902910 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:29:38.902910 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:29:38.902097 unknown[943]: wrote ssh authorized keys file for user: core
Jul 10 00:29:38.905955 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:29:38.907702 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 10 00:29:38.949667 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:29:39.141784 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 00:29:39.141784 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:29:39.145214 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 00:29:39.502601 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:29:39.614419 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:29:39.616241 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 10 00:29:39.702678 systemd-networkd[765]: eth0: Gained IPv6LL
Jul 10 00:29:40.044050 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:29:40.518104 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 00:29:40.518104 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:29:40.521018 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:29:40.554397 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:29:40.559137 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:29:40.561774 ignition[943]: INFO : files: files passed
Jul 10 00:29:40.561774 ignition[943]: INFO : Ignition finished successfully
Jul 10 00:29:40.561722 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:29:40.575769 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:29:40.578697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:29:40.580845 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:29:40.582675 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:29:40.587980 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:29:40.591414 initrd-setup-root-after-ignition[973]: grep:
Jul 10 00:29:40.591414 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:29:40.593388 initrd-setup-root-after-ignition[973]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:29:40.593388 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:29:40.595199 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:29:40.597228 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:29:40.612629 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:29:40.634184 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:29:40.634321 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:29:40.636060 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:29:40.637412 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:29:40.638776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:29:40.639631 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:29:40.655438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:29:40.664629 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:29:40.674082 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:29:40.675949 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:29:40.676859 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:29:40.678286 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:29:40.678412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:29:40.680245 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:29:40.681857 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:29:40.683088 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:29:40.684385 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:29:40.685888 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:29:40.687319 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:29:40.688671 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:29:40.690164 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:29:40.691623 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:29:40.692932 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:29:40.694090 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:29:40.694216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:29:40.695923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:29:40.697304 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 10 00:29:40.698725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:29:40.698860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:29:40.700322 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:29:40.700457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:29:40.702513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:29:40.702631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:29:40.704056 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:29:40.705152 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:29:40.708533 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:29:40.709531 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:29:40.711105 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:29:40.712202 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:29:40.712289 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:29:40.713384 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:29:40.713494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:29:40.714628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:29:40.714734 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:29:40.716076 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:29:40.716177 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:29:40.730640 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:29:40.732080 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 10 00:29:40.732811 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:29:40.732927 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:29:40.734573 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:29:40.734690 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:29:40.740260 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:29:40.741481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:29:40.744830 ignition[997]: INFO : Ignition 2.19.0 Jul 10 00:29:40.744830 ignition[997]: INFO : Stage: umount Jul 10 00:29:40.746269 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:29:40.746269 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:29:40.746269 ignition[997]: INFO : umount: umount passed Jul 10 00:29:40.746269 ignition[997]: INFO : Ignition finished successfully Jul 10 00:29:40.746172 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:29:40.747761 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:29:40.747857 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:29:40.749787 systemd[1]: Stopped target network.target - Network. Jul 10 00:29:40.750903 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:29:40.750987 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:29:40.752304 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:29:40.752353 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:29:40.753739 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:29:40.753783 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:29:40.755146 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jul 10 00:29:40.755190 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:29:40.756761 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:29:40.758211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:29:40.759781 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:29:40.759878 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:29:40.761304 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:29:40.761401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:29:40.762532 systemd-networkd[765]: eth0: DHCPv6 lease lost Jul 10 00:29:40.764595 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:29:40.764744 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:29:40.766858 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:29:40.766893 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:29:40.776567 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:29:40.777266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:29:40.777326 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:29:40.778987 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:29:40.781398 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:29:40.781519 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:29:40.787109 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:29:40.787210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:29:40.788736 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 10 00:29:40.788780 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:29:40.790155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:29:40.790195 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:29:40.793191 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:29:40.793340 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:29:40.794730 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:29:40.795515 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:29:40.797591 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:29:40.797651 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:29:40.798687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:29:40.798733 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:29:40.800122 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:29:40.800179 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:29:40.802255 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:29:40.802304 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:29:40.804334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:29:40.804394 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:29:40.807620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:29:40.809044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:29:40.809105 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:29:40.810807 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 00:29:40.810851 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:29:40.812522 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:29:40.812562 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:29:40.814267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:29:40.814304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:29:40.816930 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:29:40.818474 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:29:40.820152 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:29:40.822108 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:29:40.836585 systemd[1]: Switching root.
Jul 10 00:29:40.870488 systemd-journald[237]: Journal stopped
Jul 10 00:29:41.612562 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:29:41.612618 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:29:41.612630 kernel: SELinux: policy capability open_perms=1
Jul 10 00:29:41.612640 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:29:41.612650 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:29:41.612662 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:29:41.612672 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:29:41.612682 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:29:41.612691 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:29:41.612701 kernel: audit: type=1403 audit(1752107381.045:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:29:41.612712 systemd[1]: Successfully loaded SELinux policy in 32.570ms.
Jul 10 00:29:41.612731 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.471ms.
Jul 10 00:29:41.612743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:29:41.612754 systemd[1]: Detected virtualization kvm.
Jul 10 00:29:41.612766 systemd[1]: Detected architecture arm64.
Jul 10 00:29:41.612776 systemd[1]: Detected first boot.
Jul 10 00:29:41.612787 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:29:41.612804 zram_generator::config[1041]: No configuration found.
Jul 10 00:29:41.612815 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:29:41.612826 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:29:41.612836 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:29:41.612847 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:29:41.612860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:29:41.612871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:29:41.612881 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:29:41.612892 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:29:41.612903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:29:41.612914 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:29:41.612925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:29:41.612935 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:29:41.612945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:29:41.612958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:29:41.612969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:29:41.612979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:29:41.612990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:29:41.613001 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:29:41.613012 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 00:29:41.613022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:29:41.613035 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:29:41.613045 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:29:41.613057 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:29:41.613068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:29:41.613079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:29:41.613090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:29:41.613100 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:29:41.613111 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:29:41.613121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:29:41.613134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:29:41.613145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:29:41.613155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:29:41.613166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:29:41.613176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:29:41.613187 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:29:41.613197 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:29:41.613207 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:29:41.613218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:29:41.613230 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:29:41.613241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:29:41.613252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:29:41.613268 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:29:41.613282 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:29:41.613293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:29:41.613303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:29:41.613314 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:29:41.613324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:29:41.613336 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:29:41.613347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:29:41.613358 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:29:41.613368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:29:41.613385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:29:41.613396 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:29:41.613407 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:29:41.613417 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:29:41.613429 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:29:41.613440 kernel: fuse: init (API version 7.39)
Jul 10 00:29:41.613465 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:29:41.613476 kernel: loop: module loaded
Jul 10 00:29:41.613486 kernel: ACPI: bus type drm_connector registered
Jul 10 00:29:41.613496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:29:41.613507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:29:41.613519 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:29:41.613530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:29:41.613540 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:29:41.613553 systemd[1]: Stopped verity-setup.service.
Jul 10 00:29:41.613582 systemd-journald[1105]: Collecting audit messages is disabled.
Jul 10 00:29:41.613604 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:29:41.613615 systemd-journald[1105]: Journal started
Jul 10 00:29:41.613637 systemd-journald[1105]: Runtime Journal (/run/log/journal/bd9b9b43b0ec4ba6a81c7d992ad06438) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:29:41.439092 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:29:41.453330 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:29:41.453694 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:29:41.615474 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:29:41.616002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:29:41.617060 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:29:41.617920 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:29:41.618990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:29:41.619889 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:29:41.620810 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:29:41.621892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:29:41.623229 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:29:41.623366 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:29:41.624629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:29:41.624761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:29:41.625910 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:29:41.626044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:29:41.628850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:29:41.629083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:29:41.630409 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:29:41.630642 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:29:41.631894 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:29:41.632128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:29:41.633287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:29:41.634506 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:29:41.635751 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:29:41.647261 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:29:41.659561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:29:41.661560 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:29:41.662472 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:29:41.662502 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:29:41.664232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 10 00:29:41.666308 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:29:41.668333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:29:41.669346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:29:41.671663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:29:41.674638 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:29:41.675857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:29:41.677929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:29:41.680089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:29:41.683864 systemd-journald[1105]: Time spent on flushing to /var/log/journal/bd9b9b43b0ec4ba6a81c7d992ad06438 is 19.902ms for 857 entries.
Jul 10 00:29:41.683864 systemd-journald[1105]: System Journal (/var/log/journal/bd9b9b43b0ec4ba6a81c7d992ad06438) is 8.0M, max 195.6M, 187.6M free.
Jul 10 00:29:41.713159 systemd-journald[1105]: Received client request to flush runtime journal.
Jul 10 00:29:41.713194 kernel: loop0: detected capacity change from 0 to 114328
Jul 10 00:29:41.683683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:29:41.689642 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:29:41.693913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:29:41.699025 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:29:41.700344 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:29:41.701676 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:29:41.703081 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:29:41.704706 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:29:41.706037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:29:41.713150 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:29:41.724758 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 10 00:29:41.727290 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Jul 10 00:29:41.727599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:29:41.727308 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Jul 10 00:29:41.728656 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 10 00:29:41.731052 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:29:41.735078 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:29:41.745038 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:29:41.749659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:29:41.750408 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 10 00:29:41.754297 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 10 00:29:41.757463 kernel: loop1: detected capacity change from 0 to 207008
Jul 10 00:29:41.777697 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:29:41.784596 kernel: loop2: detected capacity change from 0 to 114432
Jul 10 00:29:41.789686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:29:41.802885 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jul 10 00:29:41.802900 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jul 10 00:29:41.806842 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:29:41.817489 kernel: loop3: detected capacity change from 0 to 114328
Jul 10 00:29:41.822464 kernel: loop4: detected capacity change from 0 to 207008
Jul 10 00:29:41.827470 kernel: loop5: detected capacity change from 0 to 114432
Jul 10 00:29:41.830354 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 00:29:41.830740 (sd-merge)[1182]: Merged extensions into '/usr'.
Jul 10 00:29:41.835752 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:29:41.835769 systemd[1]: Reloading...
Jul 10 00:29:41.882474 zram_generator::config[1204]: No configuration found.
Jul 10 00:29:41.958426 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:29:41.994275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:29:42.030731 systemd[1]: Reloading finished in 194 ms.
Jul 10 00:29:42.057819 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:29:42.059091 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:29:42.070658 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:29:42.072434 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:29:42.082648 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:29:42.082662 systemd[1]: Reloading...
Jul 10 00:29:42.097151 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:29:42.097412 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:29:42.098082 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:29:42.098294 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jul 10 00:29:42.098344 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jul 10 00:29:42.100639 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:29:42.100652 systemd-tmpfiles[1243]: Skipping /boot
Jul 10 00:29:42.107267 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:29:42.107283 systemd-tmpfiles[1243]: Skipping /boot
Jul 10 00:29:42.133469 zram_generator::config[1270]: No configuration found.
Jul 10 00:29:42.218579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:29:42.253756 systemd[1]: Reloading finished in 170 ms.
Jul 10 00:29:42.272071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:29:42.282937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:29:42.288550 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 10 00:29:42.290781 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:29:42.292708 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:29:42.297666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:29:42.300692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:29:42.304916 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:29:42.313176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:29:42.314704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:29:42.317795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:29:42.321535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:29:42.322345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:29:42.329408 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:29:42.334562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:29:42.337128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:29:42.337888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:29:42.339316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:29:42.339471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:29:42.341917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:29:42.342041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:29:42.343585 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Jul 10 00:29:42.346010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:29:42.355770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:29:42.362028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:29:42.362859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:29:42.362974 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:29:42.366011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:29:42.368191 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:29:42.369782 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:29:42.373099 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:29:42.374676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:29:42.374809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:29:42.376304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:29:42.376442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:29:42.383906 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:29:42.393675 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:29:42.408426 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:29:42.413701 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 10 00:29:42.414055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:29:42.424639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:29:42.427224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:29:42.427638 augenrules[1369]: No rules
Jul 10 00:29:42.431886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:29:42.436182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:29:42.437237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:29:42.439266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:29:42.441970 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 00:29:42.442833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:29:42.444510 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 10 00:29:42.445763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:29:42.447537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:29:42.448760 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:29:42.448872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:29:42.450737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:29:42.450874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:29:42.453810 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:29:42.453955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:29:42.459175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1344)
Jul 10 00:29:42.467097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:29:42.467163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:29:42.492132 systemd-resolved[1310]: Positive Trust Anchors:
Jul 10 00:29:42.492154 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:29:42.492186 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:29:42.497965 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Jul 10 00:29:42.499474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:29:42.502613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:29:42.515442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:29:42.531610 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:29:42.532735 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 00:29:42.534414 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:29:42.540410 systemd-networkd[1381]: lo: Link UP
Jul 10 00:29:42.540420 systemd-networkd[1381]: lo: Gained carrier
Jul 10 00:29:42.541185 systemd-networkd[1381]: Enumeration completed
Jul 10 00:29:42.542803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:29:42.545602 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:29:42.546281 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:29:42.546290 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:29:42.547018 systemd-networkd[1381]: eth0: Link UP
Jul 10 00:29:42.547021 systemd-networkd[1381]: eth0: Gained carrier
Jul 10 00:29:42.547034 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:29:42.551610 systemd[1]: Reached target network.target - Network.
Jul 10 00:29:42.558606 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:29:42.560744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:29:42.562001 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 10 00:29:42.563514 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:29:42.564794 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jul 10 00:29:42.565666 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 10 00:29:42.566685 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 10 00:29:42.566748 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2025-07-10 00:29:42.736998 UTC. Jul 10 00:29:42.594653 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:29:42.618429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:29:42.638124 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 00:29:42.639478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:29:42.640411 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:29:42.641432 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:29:42.642492 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:29:42.643773 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:29:42.644795 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:29:42.645881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:29:42.646937 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:29:42.646973 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:29:42.647905 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:29:42.649511 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:29:42.651794 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:29:42.663494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:29:42.665649 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jul 10 00:29:42.666945 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:29:42.667957 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:29:42.668708 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:29:42.669442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:29:42.669486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:29:42.670415 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:29:42.672174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:29:42.674598 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:29:42.675692 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:29:42.677639 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:29:42.681568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:29:42.682434 jq[1414]: false Jul 10 00:29:42.682874 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:29:42.687629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:29:42.691037 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:29:42.694755 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:29:42.701828 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:29:42.703427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 10 00:29:42.703855 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:29:42.704815 extend-filesystems[1415]: Found loop3 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found loop4 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found loop5 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda1 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda2 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda3 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found usr Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda4 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda6 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda7 Jul 10 00:29:42.704815 extend-filesystems[1415]: Found vda9 Jul 10 00:29:42.704815 extend-filesystems[1415]: Checking size of /dev/vda9 Jul 10 00:29:42.704945 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:29:42.706540 dbus-daemon[1413]: [system] SELinux support is enabled Jul 10 00:29:42.708076 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:29:42.709547 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:29:42.715492 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 10 00:29:42.719876 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:29:42.720037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:29:42.724771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:29:42.725111 jq[1431]: true Jul 10 00:29:42.725431 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:29:42.740074 extend-filesystems[1415]: Resized partition /dev/vda9 Jul 10 00:29:42.742721 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:29:42.748681 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:29:42.748710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1344) Jul 10 00:29:42.744357 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:29:42.744408 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:29:42.750014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:29:42.750046 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:29:42.753669 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:29:42.753869 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:29:42.756607 update_engine[1426]: I20250710 00:29:42.756057 1426 main.cc:92] Flatcar Update Engine starting Jul 10 00:29:42.761659 tar[1435]: linux-arm64/LICENSE Jul 10 00:29:42.763579 tar[1435]: linux-arm64/helm Jul 10 00:29:42.761988 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:29:42.763688 update_engine[1426]: I20250710 00:29:42.761770 1426 update_check_scheduler.cc:74] Next update check in 5m16s Jul 10 00:29:42.763239 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:29:42.770998 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 10 00:29:42.772570 jq[1442]: true Jul 10 00:29:42.810522 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:29:42.812352 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:29:42.812676 systemd-logind[1425]: New seat seat0. Jul 10 00:29:42.814364 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:29:42.828208 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:29:42.828208 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:29:42.828208 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:29:42.833417 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jul 10 00:29:42.830998 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:29:42.831168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:29:42.844423 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:29:42.848091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:29:42.850329 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:29:42.872248 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:29:42.968127 containerd[1441]: time="2025-07-10T00:29:42.967981680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 10 00:29:42.996779 containerd[1441]: time="2025-07-10T00:29:42.996377080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.997890 containerd[1441]: time="2025-07-10T00:29:42.997856520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:29:42.998005 containerd[1441]: time="2025-07-10T00:29:42.997988920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998107960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998280120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998301640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998357560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998377840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998559920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998576320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998594080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998603720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998683160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999209 containerd[1441]: time="2025-07-10T00:29:42.998861600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999471 containerd[1441]: time="2025-07-10T00:29:42.998958680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:29:42.999471 containerd[1441]: time="2025-07-10T00:29:42.998972120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:29:42.999471 containerd[1441]: time="2025-07-10T00:29:42.999042400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:29:42.999471 containerd[1441]: time="2025-07-10T00:29:42.999077880Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:29:43.002556 containerd[1441]: time="2025-07-10T00:29:43.002529068Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:29:43.002676 containerd[1441]: time="2025-07-10T00:29:43.002662042Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 10 00:29:43.002790 containerd[1441]: time="2025-07-10T00:29:43.002776019Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 00:29:43.002885 containerd[1441]: time="2025-07-10T00:29:43.002870428Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 00:29:43.003040 containerd[1441]: time="2025-07-10T00:29:43.003021621Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:29:43.003240 containerd[1441]: time="2025-07-10T00:29:43.003219590Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:29:43.003703 containerd[1441]: time="2025-07-10T00:29:43.003681381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:29:43.004003 containerd[1441]: time="2025-07-10T00:29:43.003978987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 00:29:43.004074 containerd[1441]: time="2025-07-10T00:29:43.004060324Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 00:29:43.004183 containerd[1441]: time="2025-07-10T00:29:43.004165354Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 00:29:43.004242 containerd[1441]: time="2025-07-10T00:29:43.004228838Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004347 containerd[1441]: time="2025-07-10T00:29:43.004332071Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 10 00:29:43.004406 containerd[1441]: time="2025-07-10T00:29:43.004393839Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004567 containerd[1441]: time="2025-07-10T00:29:43.004549567Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004690 containerd[1441]: time="2025-07-10T00:29:43.004622325Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004749 containerd[1441]: time="2025-07-10T00:29:43.004736465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004802 containerd[1441]: time="2025-07-10T00:29:43.004790022Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004856 containerd[1441]: time="2025-07-10T00:29:43.004843334Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:29:43.004981 containerd[1441]: time="2025-07-10T00:29:43.004965359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005056 containerd[1441]: time="2025-07-10T00:29:43.005041916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005161 containerd[1441]: time="2025-07-10T00:29:43.005145884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005224 containerd[1441]: time="2025-07-10T00:29:43.005211002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 10 00:29:43.005338 containerd[1441]: time="2025-07-10T00:29:43.005322855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005406 containerd[1441]: time="2025-07-10T00:29:43.005392752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005532 containerd[1441]: time="2025-07-10T00:29:43.005515594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005592 containerd[1441]: time="2025-07-10T00:29:43.005579283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.005697 containerd[1441]: time="2025-07-10T00:29:43.005633943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005746041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005768141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005795921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005809851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005825947Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005857485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005872682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.006095 containerd[1441]: time="2025-07-10T00:29:43.005884079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:29:43.007273 containerd[1441]: time="2025-07-10T00:29:43.006927644Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007407328Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007429347Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007442787Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007452265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007479595Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007499980Z" level=info msg="NRI interface is disabled by configuration." Jul 10 00:29:43.007947 containerd[1441]: time="2025-07-10T00:29:43.007511827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 10 00:29:43.008178 containerd[1441]: time="2025-07-10T00:29:43.007861765Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 10 00:29:43.008178 containerd[1441]: time="2025-07-10T00:29:43.007925658Z" level=info msg="Connect containerd service" Jul 10 00:29:43.008178 containerd[1441]: time="2025-07-10T00:29:43.007953315Z" level=info msg="using legacy CRI server" Jul 10 00:29:43.008178 containerd[1441]: time="2025-07-10T00:29:43.007960872Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:29:43.008178 containerd[1441]: time="2025-07-10T00:29:43.008037878Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:29:43.010563 containerd[1441]: time="2025-07-10T00:29:43.010120636Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:29:43.010563 containerd[1441]: time="2025-07-10T00:29:43.010493369Z" level=info msg="Start subscribing containerd event" Jul 10 00:29:43.010563 containerd[1441]: time="2025-07-10T00:29:43.010539532Z" level=info msg="Start recovering state" Jul 10 00:29:43.010693 containerd[1441]: time="2025-07-10T00:29:43.010612943Z" level=info msg="Start event monitor" Jul 10 00:29:43.010693 containerd[1441]: time="2025-07-10T00:29:43.010625648Z" level=info msg="Start snapshots syncer" Jul 10 00:29:43.010693 containerd[1441]: time="2025-07-10T00:29:43.010634268Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:29:43.010693 containerd[1441]: time="2025-07-10T00:29:43.010641172Z" level=info msg="Start streaming server" Jul 10 00:29:43.011373 containerd[1441]: time="2025-07-10T00:29:43.011274909Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:29:43.011423 containerd[1441]: time="2025-07-10T00:29:43.011406861Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:29:43.011548 containerd[1441]: time="2025-07-10T00:29:43.011461766Z" level=info msg="containerd successfully booted in 0.045889s" Jul 10 00:29:43.011638 systemd[1]: Started containerd.service - containerd container runtime.
Jul 10 00:29:43.119975 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:29:43.139722 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:29:43.159752 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:29:43.166025 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:29:43.167667 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:29:43.170064 tar[1435]: linux-arm64/README.md Jul 10 00:29:43.171070 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:29:43.181514 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:29:43.185086 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:29:43.187740 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:29:43.189721 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 00:29:43.190742 systemd[1]: Reached target getty.target - Login Prompts.
Jul 10 00:29:44.439052 systemd-networkd[1381]: eth0: Gained IPv6LL Jul 10 00:29:44.441526 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:29:44.442909 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:29:44.451738 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:29:44.453681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:44.455435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:29:44.469497 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:29:44.470774 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:29:44.473068 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:29:44.480736 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:29:45.020827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:45.022122 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:29:45.025447 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:29:45.026605 systemd[1]: Startup finished in 624ms (kernel) + 5.321s (initrd) + 4.017s (userspace) = 9.963s. 
Jul 10 00:29:45.440949 kubelet[1526]: E0710 00:29:45.440829 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:29:45.443184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:29:45.443332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:29:49.012205 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:29:49.013356 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:49196.service - OpenSSH per-connection server daemon (10.0.0.1:49196). Jul 10 00:29:49.071587 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 49196 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:29:49.073642 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:29:49.081488 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:29:49.091776 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:29:49.093618 systemd-logind[1425]: New session 1 of user core. Jul 10 00:29:49.103258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:29:49.118847 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:29:49.124561 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:29:49.215313 systemd[1543]: Queued start job for default target default.target. Jul 10 00:29:49.225165 systemd[1543]: Created slice app.slice - User Application Slice. Jul 10 00:29:49.225194 systemd[1543]: Reached target paths.target - Paths. Jul 10 00:29:49.225207 systemd[1543]: Reached target timers.target - Timers. 
Jul 10 00:29:49.226521 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:29:49.237342 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:29:49.237490 systemd[1543]: Reached target sockets.target - Sockets. Jul 10 00:29:49.237511 systemd[1543]: Reached target basic.target - Basic System. Jul 10 00:29:49.237554 systemd[1543]: Reached target default.target - Main User Target. Jul 10 00:29:49.237586 systemd[1543]: Startup finished in 105ms. Jul 10 00:29:49.237743 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:29:49.239161 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:29:49.302072 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200). Jul 10 00:29:49.366776 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:29:49.368266 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:29:49.372553 systemd-logind[1425]: New session 2 of user core. Jul 10 00:29:49.379621 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:29:49.432001 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 10 00:29:49.450062 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:49200.service: Deactivated successfully. Jul 10 00:29:49.452217 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:29:49.454768 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:29:49.461748 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:49202.service - OpenSSH per-connection server daemon (10.0.0.1:49202). Jul 10 00:29:49.463106 systemd-logind[1425]: Removed session 2. 
Jul 10 00:29:49.499187 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 49202 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:49.500588 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:49.504851 systemd-logind[1425]: New session 3 of user core.
Jul 10 00:29:49.510631 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 00:29:49.564401 sshd[1561]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:49.573666 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:49202.service: Deactivated successfully.
Jul 10 00:29:49.575126 systemd[1]: session-3.scope: Deactivated successfully.
Jul 10 00:29:49.579764 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit.
Jul 10 00:29:49.592851 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:49204.service - OpenSSH per-connection server daemon (10.0.0.1:49204).
Jul 10 00:29:49.594558 systemd-logind[1425]: Removed session 3.
Jul 10 00:29:49.631293 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 49204 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:49.632731 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:49.638294 systemd-logind[1425]: New session 4 of user core.
Jul 10 00:29:49.647668 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 10 00:29:49.709408 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:49.725253 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:49204.service: Deactivated successfully.
Jul 10 00:29:49.727819 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:29:49.729083 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:29:49.738874 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:49210.service - OpenSSH per-connection server daemon (10.0.0.1:49210).
Jul 10 00:29:49.743088 systemd-logind[1425]: Removed session 4.
Jul 10 00:29:49.781287 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 49210 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:49.782791 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:49.787565 systemd-logind[1425]: New session 5 of user core.
Jul 10 00:29:49.799657 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 10 00:29:49.871920 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:29:49.872946 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:29:49.898158 sudo[1578]: pam_unix(sudo:session): session closed for user root
Jul 10 00:29:49.900134 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:49.915431 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:49210.service: Deactivated successfully.
Jul 10 00:29:49.917182 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:29:49.919205 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:29:49.923113 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:49218.service - OpenSSH per-connection server daemon (10.0.0.1:49218).
Jul 10 00:29:49.925325 systemd-logind[1425]: Removed session 5.
Jul 10 00:29:49.966482 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 49218 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:49.968040 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:49.972800 systemd-logind[1425]: New session 6 of user core.
Jul 10 00:29:49.983635 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 10 00:29:50.036819 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:29:50.037101 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:29:50.044087 sudo[1587]: pam_unix(sudo:session): session closed for user root
Jul 10 00:29:50.049371 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 10 00:29:50.049684 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:29:50.071136 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 10 00:29:50.072477 auditctl[1590]: No rules
Jul 10 00:29:50.072790 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:29:50.073006 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 10 00:29:50.075100 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 10 00:29:50.099509 augenrules[1608]: No rules
Jul 10 00:29:50.102502 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 10 00:29:50.103812 sudo[1586]: pam_unix(sudo:session): session closed for user root
Jul 10 00:29:50.105683 sshd[1583]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:50.115973 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:49218.service: Deactivated successfully.
Jul 10 00:29:50.117412 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:29:50.120704 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:29:50.127750 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:49222.service - OpenSSH per-connection server daemon (10.0.0.1:49222).
Jul 10 00:29:50.129026 systemd-logind[1425]: Removed session 6.
Jul 10 00:29:50.163945 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 49222 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:50.165285 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:50.169556 systemd-logind[1425]: New session 7 of user core.
Jul 10 00:29:50.184668 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 10 00:29:50.237008 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:29:50.237627 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:29:50.586767 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 00:29:50.586872 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 00:29:50.941671 dockerd[1637]: time="2025-07-10T00:29:50.941543303Z" level=info msg="Starting up"
Jul 10 00:29:51.108868 dockerd[1637]: time="2025-07-10T00:29:51.108521851Z" level=info msg="Loading containers: start."
Jul 10 00:29:51.209483 kernel: Initializing XFRM netlink socket
Jul 10 00:29:51.293686 systemd-networkd[1381]: docker0: Link UP
Jul 10 00:29:51.314708 dockerd[1637]: time="2025-07-10T00:29:51.314647130Z" level=info msg="Loading containers: done."
Jul 10 00:29:51.331258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1075454153-merged.mount: Deactivated successfully.
Jul 10 00:29:51.336000 dockerd[1637]: time="2025-07-10T00:29:51.335947209Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:29:51.336112 dockerd[1637]: time="2025-07-10T00:29:51.336086380Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 10 00:29:51.336235 dockerd[1637]: time="2025-07-10T00:29:51.336211489Z" level=info msg="Daemon has completed initialization"
Jul 10 00:29:51.364068 dockerd[1637]: time="2025-07-10T00:29:51.363941889Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:29:51.364472 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:29:51.967825 containerd[1441]: time="2025-07-10T00:29:51.967775303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 10 00:29:52.782011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644677919.mount: Deactivated successfully.
Jul 10 00:29:53.894244 containerd[1441]: time="2025-07-10T00:29:53.894183454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:53.895254 containerd[1441]: time="2025-07-10T00:29:53.895211663Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 10 00:29:53.895957 containerd[1441]: time="2025-07-10T00:29:53.895922223Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:53.899147 containerd[1441]: time="2025-07-10T00:29:53.899089190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:53.900997 containerd[1441]: time="2025-07-10T00:29:53.900817701Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.932994064s"
Jul 10 00:29:53.900997 containerd[1441]: time="2025-07-10T00:29:53.900859816Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 10 00:29:53.901563 containerd[1441]: time="2025-07-10T00:29:53.901508511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 10 00:29:55.252342 containerd[1441]: time="2025-07-10T00:29:55.252292974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:55.253277 containerd[1441]: time="2025-07-10T00:29:55.253244840Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 10 00:29:55.253917 containerd[1441]: time="2025-07-10T00:29:55.253884452Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:55.256893 containerd[1441]: time="2025-07-10T00:29:55.256861128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:55.258197 containerd[1441]: time="2025-07-10T00:29:55.258079734Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.356536924s"
Jul 10 00:29:55.258197 containerd[1441]: time="2025-07-10T00:29:55.258113277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 10 00:29:55.259150 containerd[1441]: time="2025-07-10T00:29:55.259109252Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 10 00:29:55.694018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:29:55.709676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:29:55.822253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:29:55.825903 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:29:55.863324 kubelet[1849]: E0710 00:29:55.863268 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:29:55.869508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:29:55.869774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:29:56.433430 containerd[1441]: time="2025-07-10T00:29:56.433370550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:56.434634 containerd[1441]: time="2025-07-10T00:29:56.434587381Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 10 00:29:56.435635 containerd[1441]: time="2025-07-10T00:29:56.435587602Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:56.438518 containerd[1441]: time="2025-07-10T00:29:56.438473555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:56.439729 containerd[1441]: time="2025-07-10T00:29:56.439684203Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.180537877s"
Jul 10 00:29:56.439729 containerd[1441]: time="2025-07-10T00:29:56.439715319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 10 00:29:56.440189 containerd[1441]: time="2025-07-10T00:29:56.440165442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 10 00:29:57.409750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830412271.mount: Deactivated successfully.
Jul 10 00:29:57.649642 containerd[1441]: time="2025-07-10T00:29:57.649587120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:57.650440 containerd[1441]: time="2025-07-10T00:29:57.650301178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 10 00:29:57.651408 containerd[1441]: time="2025-07-10T00:29:57.651353182Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:57.653279 containerd[1441]: time="2025-07-10T00:29:57.653242086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:57.653945 containerd[1441]: time="2025-07-10T00:29:57.653912441Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.213712796s"
Jul 10 00:29:57.654015 containerd[1441]: time="2025-07-10T00:29:57.653949643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 10 00:29:57.654418 containerd[1441]: time="2025-07-10T00:29:57.654394740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:29:58.321314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860516579.mount: Deactivated successfully.
Jul 10 00:29:59.209802 containerd[1441]: time="2025-07-10T00:29:59.209746150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.210483 containerd[1441]: time="2025-07-10T00:29:59.210436201Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 10 00:29:59.212411 containerd[1441]: time="2025-07-10T00:29:59.212367526Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.215062 containerd[1441]: time="2025-07-10T00:29:59.214989904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.216474 containerd[1441]: time="2025-07-10T00:29:59.216319680Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.561892603s"
Jul 10 00:29:59.216474 containerd[1441]: time="2025-07-10T00:29:59.216363590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 10 00:29:59.216920 containerd[1441]: time="2025-07-10T00:29:59.216889229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:29:59.754517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835672485.mount: Deactivated successfully.
Jul 10 00:29:59.758347 containerd[1441]: time="2025-07-10T00:29:59.758293989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.759442 containerd[1441]: time="2025-07-10T00:29:59.759396355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 10 00:29:59.760385 containerd[1441]: time="2025-07-10T00:29:59.760334067Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.762487 containerd[1441]: time="2025-07-10T00:29:59.762422265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:59.763389 containerd[1441]: time="2025-07-10T00:29:59.763302874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.378517ms"
Jul 10 00:29:59.763389 containerd[1441]: time="2025-07-10T00:29:59.763336759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 10 00:29:59.763811 containerd[1441]: time="2025-07-10T00:29:59.763785765Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 10 00:30:00.299758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307811562.mount: Deactivated successfully.
Jul 10 00:30:02.332731 containerd[1441]: time="2025-07-10T00:30:02.332671639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:02.337314 containerd[1441]: time="2025-07-10T00:30:02.337267368Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 10 00:30:02.337963 containerd[1441]: time="2025-07-10T00:30:02.337931645Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:02.342088 containerd[1441]: time="2025-07-10T00:30:02.342050012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:02.342791 containerd[1441]: time="2025-07-10T00:30:02.342752874Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.578936719s"
Jul 10 00:30:02.342828 containerd[1441]: time="2025-07-10T00:30:02.342788695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 10 00:30:06.120059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:30:06.129661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:06.293633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:06.297147 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:30:06.331605 kubelet[2011]: E0710 00:30:06.331546 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:30:06.334353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:30:06.334532 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:30:07.849237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:07.858966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:07.879934 systemd[1]: Reloading requested from client PID 2026 ('systemctl') (unit session-7.scope)...
Jul 10 00:30:07.879950 systemd[1]: Reloading...
Jul 10 00:30:07.950216 zram_generator::config[2065]: No configuration found.
Jul 10 00:30:08.172146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:30:08.226226 systemd[1]: Reloading finished in 345 ms.
Jul 10 00:30:08.268721 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:08.271829 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:30:08.272027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:08.282694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:08.380517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:08.385410 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:30:08.418858 kubelet[2113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:30:08.418858 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:30:08.418858 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:30:08.419242 kubelet[2113]: I0710 00:30:08.418911 2113 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:30:09.018501 kubelet[2113]: I0710 00:30:09.018399 2113 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 00:30:09.018501 kubelet[2113]: I0710 00:30:09.018433 2113 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:30:09.018769 kubelet[2113]: I0710 00:30:09.018738 2113 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 00:30:09.050889 kubelet[2113]: E0710 00:30:09.050829 2113 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:30:09.052261 kubelet[2113]: I0710 00:30:09.052231 2113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:30:09.056899 kubelet[2113]: E0710 00:30:09.056848 2113 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:30:09.056899 kubelet[2113]: I0710 00:30:09.056875 2113 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:30:09.059394 kubelet[2113]: I0710 00:30:09.059369 2113 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:30:09.060015 kubelet[2113]: I0710 00:30:09.059969 2113 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:30:09.060740 kubelet[2113]: I0710 00:30:09.060010 2113 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:30:09.060740 kubelet[2113]: I0710 00:30:09.060325 2113 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:30:09.060740 kubelet[2113]: I0710 00:30:09.060336 2113 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 00:30:09.060740 kubelet[2113]: I0710 00:30:09.060547 2113 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:30:09.063850 kubelet[2113]: I0710 00:30:09.063812 2113 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 00:30:09.063850 kubelet[2113]: I0710 00:30:09.063840 2113 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:30:09.063945 kubelet[2113]: I0710 00:30:09.063860 2113 kubelet.go:352] "Adding apiserver pod source"
Jul 10 00:30:09.063945 kubelet[2113]: I0710 00:30:09.063870 2113 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:30:09.066345 kubelet[2113]: W0710 00:30:09.066198 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Jul 10 00:30:09.066345 kubelet[2113]: E0710 00:30:09.066257 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:30:09.066590 kubelet[2113]: W0710 00:30:09.066560 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Jul 10 00:30:09.066689 kubelet[2113]: E0710 00:30:09.066673 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:30:09.068850 kubelet[2113]: I0710 00:30:09.068818 2113 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 10 00:30:09.069545 kubelet[2113]: I0710 00:30:09.069522 2113 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:30:09.069679 kubelet[2113]: W0710 00:30:09.069659 2113 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:30:09.071244 kubelet[2113]: I0710 00:30:09.071213 2113 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:30:09.071244 kubelet[2113]: I0710 00:30:09.071247 2113 server.go:1287] "Started kubelet"
Jul 10 00:30:09.071624 kubelet[2113]: I0710 00:30:09.071468 2113 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:30:09.072882 kubelet[2113]: I0710 00:30:09.072859 2113 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 00:30:09.073088 kubelet[2113]: I0710 00:30:09.073039 2113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:30:09.073372 kubelet[2113]: I0710 00:30:09.073352 2113 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:30:09.074493 kubelet[2113]: I0710 00:30:09.074440 2113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:30:09.075299 kubelet[2113]: I0710 00:30:09.075271 2113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:30:09.076520 kubelet[2113]: E0710 00:30:09.076495 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:09.076584 kubelet[2113]: I0710 00:30:09.076530 2113 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:30:09.076608 kubelet[2113]: E0710 00:30:09.076033 2113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bc67ce0546aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:30:09.071228586 +0000 UTC m=+0.682604889,LastTimestamp:2025-07-10 00:30:09.071228586 +0000 UTC m=+0.682604889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:30:09.076706 kubelet[2113]: I0710 00:30:09.076678 2113 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:30:09.076750 kubelet[2113]: I0710 00:30:09.076734 2113 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:30:09.076926 kubelet[2113]: E0710 00:30:09.076894 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms"
Jul 10 00:30:09.077166 kubelet[2113]: W0710 00:30:09.076994 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Jul 10 00:30:09.077166 kubelet[2113]: E0710 00:30:09.077040 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:30:09.077166 kubelet[2113]: E0710 00:30:09.077063 2113 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:30:09.077166 kubelet[2113]: I0710 00:30:09.077090 2113 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:30:09.077277 kubelet[2113]: I0710 00:30:09.077172 2113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:30:09.078132 kubelet[2113]: I0710 00:30:09.078116 2113 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:30:09.089810 kubelet[2113]: I0710 00:30:09.089601 2113 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:30:09.089810 kubelet[2113]: I0710 00:30:09.089633 2113 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:30:09.089810 kubelet[2113]: I0710 00:30:09.089662 2113 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:30:09.090205 kubelet[2113]: I0710 00:30:09.090166 2113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:30:09.091212 kubelet[2113]: I0710 00:30:09.091182 2113 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jul 10 00:30:09.091212 kubelet[2113]: I0710 00:30:09.091208 2113 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:30:09.091290 kubelet[2113]: I0710 00:30:09.091229 2113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:30:09.091290 kubelet[2113]: I0710 00:30:09.091236 2113 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:30:09.091290 kubelet[2113]: E0710 00:30:09.091277 2113 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:30:09.165853 kubelet[2113]: I0710 00:30:09.165801 2113 policy_none.go:49] "None policy: Start" Jul 10 00:30:09.165853 kubelet[2113]: I0710 00:30:09.165838 2113 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:30:09.165853 kubelet[2113]: I0710 00:30:09.165852 2113 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:30:09.166169 kubelet[2113]: W0710 00:30:09.166117 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jul 10 00:30:09.166211 kubelet[2113]: E0710 00:30:09.166171 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:30:09.170824 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 10 00:30:09.177572 kubelet[2113]: E0710 00:30:09.177527 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:30:09.183069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:30:09.185696 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:30:09.192310 kubelet[2113]: E0710 00:30:09.192276 2113 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:30:09.205163 kubelet[2113]: I0710 00:30:09.205136 2113 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:30:09.205356 kubelet[2113]: I0710 00:30:09.205341 2113 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:30:09.205516 kubelet[2113]: I0710 00:30:09.205358 2113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:30:09.206289 kubelet[2113]: I0710 00:30:09.205619 2113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:30:09.206344 kubelet[2113]: E0710 00:30:09.206288 2113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:30:09.206344 kubelet[2113]: E0710 00:30:09.206323 2113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:30:09.278520 kubelet[2113]: E0710 00:30:09.277622 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jul 10 00:30:09.306776 kubelet[2113]: I0710 00:30:09.306744 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:09.307146 kubelet[2113]: E0710 00:30:09.307123 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 10 00:30:09.399605 systemd[1]: Created slice kubepods-burstable-pod2d63a6f4aceeca0cb752b063dfa33607.slice - libcontainer container kubepods-burstable-pod2d63a6f4aceeca0cb752b063dfa33607.slice. Jul 10 00:30:09.420735 kubelet[2113]: E0710 00:30:09.420665 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:09.423426 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 10 00:30:09.424922 kubelet[2113]: E0710 00:30:09.424901 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:09.426973 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 10 00:30:09.428551 kubelet[2113]: E0710 00:30:09.428370 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:09.479281 kubelet[2113]: I0710 00:30:09.479232 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:09.479281 kubelet[2113]: I0710 00:30:09.479272 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:09.479407 kubelet[2113]: I0710 00:30:09.479291 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:09.479407 kubelet[2113]: I0710 00:30:09.479314 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:09.479407 kubelet[2113]: I0710 00:30:09.479331 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:09.479407 kubelet[2113]: I0710 00:30:09.479347 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:09.479407 kubelet[2113]: I0710 00:30:09.479360 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:09.479551 kubelet[2113]: I0710 00:30:09.479375 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:09.479551 kubelet[2113]: I0710 00:30:09.479388 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:09.508397 kubelet[2113]: I0710 00:30:09.508354 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:09.508763 kubelet[2113]: E0710 
00:30:09.508730 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 10 00:30:09.678099 kubelet[2113]: E0710 00:30:09.678000 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jul 10 00:30:09.721615 kubelet[2113]: E0710 00:30:09.721580 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:09.722399 containerd[1441]: time="2025-07-10T00:30:09.722350355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2d63a6f4aceeca0cb752b063dfa33607,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:09.725503 kubelet[2113]: E0710 00:30:09.725471 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:09.725848 containerd[1441]: time="2025-07-10T00:30:09.725810083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:09.729120 kubelet[2113]: E0710 00:30:09.729089 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:09.729509 containerd[1441]: time="2025-07-10T00:30:09.729373639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:09.910498 kubelet[2113]: 
I0710 00:30:09.910469 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:09.910791 kubelet[2113]: E0710 00:30:09.910769 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 10 00:30:10.200522 kubelet[2113]: W0710 00:30:10.200423 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jul 10 00:30:10.200522 kubelet[2113]: E0710 00:30:10.200513 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:30:10.342784 kubelet[2113]: W0710 00:30:10.342722 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jul 10 00:30:10.342784 kubelet[2113]: E0710 00:30:10.342789 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:30:10.383846 kubelet[2113]: W0710 00:30:10.383791 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jul 10 00:30:10.383846 kubelet[2113]: E0710 00:30:10.383842 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:30:10.403190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536376399.mount: Deactivated successfully. Jul 10 00:30:10.408555 containerd[1441]: time="2025-07-10T00:30:10.408512545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:30:10.409180 containerd[1441]: time="2025-07-10T00:30:10.409148793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 10 00:30:10.409975 containerd[1441]: time="2025-07-10T00:30:10.409935328Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:30:10.410938 containerd[1441]: time="2025-07-10T00:30:10.410905209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:30:10.411038 containerd[1441]: time="2025-07-10T00:30:10.411021397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:30:10.412374 containerd[1441]: time="2025-07-10T00:30:10.412346123Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:30:10.412556 containerd[1441]: time="2025-07-10T00:30:10.412531310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:30:10.414795 containerd[1441]: time="2025-07-10T00:30:10.414721337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 688.851736ms" Jul 10 00:30:10.415247 containerd[1441]: time="2025-07-10T00:30:10.415153067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:30:10.416168 containerd[1441]: time="2025-07-10T00:30:10.415982067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 693.503427ms" Jul 10 00:30:10.419174 containerd[1441]: time="2025-07-10T00:30:10.419117361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 689.689486ms" Jul 10 00:30:10.479028 kubelet[2113]: E0710 00:30:10.478632 2113 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Jul 10 00:30:10.530984 kubelet[2113]: W0710 00:30:10.530207 2113 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jul 10 00:30:10.530984 kubelet[2113]: E0710 00:30:10.530258 2113 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:30:10.553839 containerd[1441]: time="2025-07-10T00:30:10.553663923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:10.553839 containerd[1441]: time="2025-07-10T00:30:10.553719755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:10.553839 containerd[1441]: time="2025-07-10T00:30:10.553732563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.553839 containerd[1441]: time="2025-07-10T00:30:10.553815371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.554327 containerd[1441]: time="2025-07-10T00:30:10.554255545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:10.554327 containerd[1441]: time="2025-07-10T00:30:10.554295689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:10.554327 containerd[1441]: time="2025-07-10T00:30:10.554306655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.554659 containerd[1441]: time="2025-07-10T00:30:10.554550236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.554880 containerd[1441]: time="2025-07-10T00:30:10.554787213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:10.554880 containerd[1441]: time="2025-07-10T00:30:10.554824995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:10.554880 containerd[1441]: time="2025-07-10T00:30:10.554835801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.554994 containerd[1441]: time="2025-07-10T00:30:10.554899878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:10.582624 systemd[1]: Started cri-containerd-6439fd6718e308b590de4cf8d8d27691d929aa3207d8a770be39b21feb34f04a.scope - libcontainer container 6439fd6718e308b590de4cf8d8d27691d929aa3207d8a770be39b21feb34f04a. Jul 10 00:30:10.584021 systemd[1]: Started cri-containerd-abac08e0d36924c3e6157bab6eba6e370b87b7fb1564cddfd0af1619d09fcb54.scope - libcontainer container abac08e0d36924c3e6157bab6eba6e370b87b7fb1564cddfd0af1619d09fcb54. 
Jul 10 00:30:10.585155 systemd[1]: Started cri-containerd-e8a3d041c57355ac206ebc395470d609503194cf570f2393a2121054d32e73d4.scope - libcontainer container e8a3d041c57355ac206ebc395470d609503194cf570f2393a2121054d32e73d4. Jul 10 00:30:10.615740 containerd[1441]: time="2025-07-10T00:30:10.615685806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2d63a6f4aceeca0cb752b063dfa33607,Namespace:kube-system,Attempt:0,} returns sandbox id \"6439fd6718e308b590de4cf8d8d27691d929aa3207d8a770be39b21feb34f04a\"" Jul 10 00:30:10.618264 kubelet[2113]: E0710 00:30:10.618110 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:10.619215 containerd[1441]: time="2025-07-10T00:30:10.619165179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"abac08e0d36924c3e6157bab6eba6e370b87b7fb1564cddfd0af1619d09fcb54\"" Jul 10 00:30:10.619812 kubelet[2113]: E0710 00:30:10.619713 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:10.621371 containerd[1441]: time="2025-07-10T00:30:10.621341959Z" level=info msg="CreateContainer within sandbox \"6439fd6718e308b590de4cf8d8d27691d929aa3207d8a770be39b21feb34f04a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:30:10.622624 containerd[1441]: time="2025-07-10T00:30:10.622588960Z" level=info msg="CreateContainer within sandbox \"abac08e0d36924c3e6157bab6eba6e370b87b7fb1564cddfd0af1619d09fcb54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:30:10.626565 containerd[1441]: time="2025-07-10T00:30:10.626535203Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8a3d041c57355ac206ebc395470d609503194cf570f2393a2121054d32e73d4\"" Jul 10 00:30:10.627310 kubelet[2113]: E0710 00:30:10.627122 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:10.628680 containerd[1441]: time="2025-07-10T00:30:10.628648626Z" level=info msg="CreateContainer within sandbox \"e8a3d041c57355ac206ebc395470d609503194cf570f2393a2121054d32e73d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:30:10.642896 containerd[1441]: time="2025-07-10T00:30:10.642831311Z" level=info msg="CreateContainer within sandbox \"abac08e0d36924c3e6157bab6eba6e370b87b7fb1564cddfd0af1619d09fcb54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67ca5e07899b2e00e5e57b9118e4a23fdda8b9f4a31c6590b180feb87dd2e31c\"" Jul 10 00:30:10.643732 containerd[1441]: time="2025-07-10T00:30:10.643471842Z" level=info msg="StartContainer for \"67ca5e07899b2e00e5e57b9118e4a23fdda8b9f4a31c6590b180feb87dd2e31c\"" Jul 10 00:30:10.644903 containerd[1441]: time="2025-07-10T00:30:10.644851440Z" level=info msg="CreateContainer within sandbox \"6439fd6718e308b590de4cf8d8d27691d929aa3207d8a770be39b21feb34f04a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73c0f7be1d13f5a58f5680f1ea5c5abed29e39638900fe276f4e6661df5bc411\"" Jul 10 00:30:10.645479 containerd[1441]: time="2025-07-10T00:30:10.645267761Z" level=info msg="StartContainer for \"73c0f7be1d13f5a58f5680f1ea5c5abed29e39638900fe276f4e6661df5bc411\"" Jul 10 00:30:10.648812 containerd[1441]: time="2025-07-10T00:30:10.648769267Z" level=info msg="CreateContainer within sandbox \"e8a3d041c57355ac206ebc395470d609503194cf570f2393a2121054d32e73d4\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07073be6c8d7d8618eb135e8ded4f3c08918ea2f8c3ceecbbbc9ed590d3f8e36\"" Jul 10 00:30:10.649587 containerd[1441]: time="2025-07-10T00:30:10.649337636Z" level=info msg="StartContainer for \"07073be6c8d7d8618eb135e8ded4f3c08918ea2f8c3ceecbbbc9ed590d3f8e36\"" Jul 10 00:30:10.676660 systemd[1]: Started cri-containerd-67ca5e07899b2e00e5e57b9118e4a23fdda8b9f4a31c6590b180feb87dd2e31c.scope - libcontainer container 67ca5e07899b2e00e5e57b9118e4a23fdda8b9f4a31c6590b180feb87dd2e31c. Jul 10 00:30:10.677671 systemd[1]: Started cri-containerd-73c0f7be1d13f5a58f5680f1ea5c5abed29e39638900fe276f4e6661df5bc411.scope - libcontainer container 73c0f7be1d13f5a58f5680f1ea5c5abed29e39638900fe276f4e6661df5bc411. Jul 10 00:30:10.681432 systemd[1]: Started cri-containerd-07073be6c8d7d8618eb135e8ded4f3c08918ea2f8c3ceecbbbc9ed590d3f8e36.scope - libcontainer container 07073be6c8d7d8618eb135e8ded4f3c08918ea2f8c3ceecbbbc9ed590d3f8e36. Jul 10 00:30:10.719598 kubelet[2113]: I0710 00:30:10.715907 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:10.719598 kubelet[2113]: E0710 00:30:10.716202 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 10 00:30:10.726857 containerd[1441]: time="2025-07-10T00:30:10.726804935Z" level=info msg="StartContainer for \"73c0f7be1d13f5a58f5680f1ea5c5abed29e39638900fe276f4e6661df5bc411\" returns successfully" Jul 10 00:30:10.727127 containerd[1441]: time="2025-07-10T00:30:10.726938572Z" level=info msg="StartContainer for \"07073be6c8d7d8618eb135e8ded4f3c08918ea2f8c3ceecbbbc9ed590d3f8e36\" returns successfully" Jul 10 00:30:10.727127 containerd[1441]: time="2025-07-10T00:30:10.726965147Z" level=info msg="StartContainer for \"67ca5e07899b2e00e5e57b9118e4a23fdda8b9f4a31c6590b180feb87dd2e31c\" returns successfully" 
Jul 10 00:30:11.103063 kubelet[2113]: E0710 00:30:11.103029 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:11.107043 kubelet[2113]: E0710 00:30:11.107015 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:11.108953 kubelet[2113]: E0710 00:30:11.107150 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:11.108953 kubelet[2113]: E0710 00:30:11.108673 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:11.109180 kubelet[2113]: E0710 00:30:11.109152 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:11.109302 kubelet[2113]: E0710 00:30:11.109282 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:12.112037 kubelet[2113]: E0710 00:30:12.111779 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:12.112037 kubelet[2113]: E0710 00:30:12.111876 2113 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:12.112037 kubelet[2113]: E0710 00:30:12.111905 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:12.112037 kubelet[2113]: E0710 00:30:12.111985 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:12.317788 kubelet[2113]: I0710 00:30:12.317758 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:12.599174 kubelet[2113]: E0710 00:30:12.598250 2113 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:30:12.690102 kubelet[2113]: I0710 00:30:12.690070 2113 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:30:12.776781 kubelet[2113]: I0710 00:30:12.776727 2113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:12.782525 kubelet[2113]: E0710 00:30:12.782488 2113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:12.782525 kubelet[2113]: I0710 00:30:12.782520 2113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:12.784350 kubelet[2113]: E0710 00:30:12.784222 2113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:12.784350 kubelet[2113]: I0710 00:30:12.784248 2113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:12.786463 kubelet[2113]: E0710 00:30:12.786416 2113 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:13.068571 kubelet[2113]: I0710 00:30:13.068401 2113 apiserver.go:52] "Watching apiserver" Jul 10 00:30:13.077085 kubelet[2113]: I0710 00:30:13.077036 2113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:30:14.022750 kubelet[2113]: I0710 00:30:14.022716 2113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:14.030870 kubelet[2113]: E0710 00:30:14.030817 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:14.113270 kubelet[2113]: E0710 00:30:14.113042 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:14.777297 kubelet[2113]: I0710 00:30:14.777259 2113 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:14.783420 kubelet[2113]: E0710 00:30:14.783363 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:14.857542 systemd[1]: Reloading requested from client PID 2396 ('systemctl') (unit session-7.scope)... Jul 10 00:30:14.857558 systemd[1]: Reloading... Jul 10 00:30:14.923671 zram_generator::config[2438]: No configuration found. Jul 10 00:30:15.005023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:30:15.069858 systemd[1]: Reloading finished in 212 ms. 
Jul 10 00:30:15.112722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:15.129610 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:30:15.130944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:15.131008 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 127.5M memory peak, 0B memory swap peak. Jul 10 00:30:15.142969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:15.256330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:15.261203 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:30:15.302094 kubelet[2477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:30:15.302094 kubelet[2477]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:30:15.302094 kubelet[2477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:30:15.302438 kubelet[2477]: I0710 00:30:15.302169 2477 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:30:15.312817 kubelet[2477]: I0710 00:30:15.312774 2477 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:30:15.312817 kubelet[2477]: I0710 00:30:15.312805 2477 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:30:15.313123 kubelet[2477]: I0710 00:30:15.313108 2477 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:30:15.314551 kubelet[2477]: I0710 00:30:15.314521 2477 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:30:15.319518 kubelet[2477]: I0710 00:30:15.319068 2477 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:30:15.322937 kubelet[2477]: E0710 00:30:15.322836 2477 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:30:15.322937 kubelet[2477]: I0710 00:30:15.322870 2477 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:30:15.327866 kubelet[2477]: I0710 00:30:15.327665 2477 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:30:15.328201 kubelet[2477]: I0710 00:30:15.328148 2477 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:30:15.328486 kubelet[2477]: I0710 00:30:15.328189 2477 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:30:15.328577 kubelet[2477]: I0710 00:30:15.328500 2477 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 10 00:30:15.328577 kubelet[2477]: I0710 00:30:15.328510 2477 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:30:15.328577 kubelet[2477]: I0710 00:30:15.328558 2477 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:30:15.328746 kubelet[2477]: I0710 00:30:15.328732 2477 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:30:15.328777 kubelet[2477]: I0710 00:30:15.328751 2477 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:30:15.328777 kubelet[2477]: I0710 00:30:15.328770 2477 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:30:15.328821 kubelet[2477]: I0710 00:30:15.328779 2477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:30:15.332919 kubelet[2477]: I0710 00:30:15.332866 2477 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:30:15.333460 kubelet[2477]: I0710 00:30:15.333417 2477 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:30:15.334100 kubelet[2477]: I0710 00:30:15.334080 2477 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:30:15.334157 kubelet[2477]: I0710 00:30:15.334142 2477 server.go:1287] "Started kubelet" Jul 10 00:30:15.335116 kubelet[2477]: I0710 00:30:15.335076 2477 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:30:15.335862 kubelet[2477]: I0710 00:30:15.335830 2477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:30:15.338604 kubelet[2477]: I0710 00:30:15.338532 2477 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:30:15.340503 kubelet[2477]: I0710 00:30:15.338844 2477 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:30:15.340503 kubelet[2477]: I0710 00:30:15.339070 2477 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:30:15.342037 kubelet[2477]: I0710 00:30:15.342006 2477 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:30:15.346483 kubelet[2477]: E0710 00:30:15.344100 2477 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:30:15.346483 kubelet[2477]: I0710 00:30:15.344141 2477 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:30:15.346483 kubelet[2477]: I0710 00:30:15.344314 2477 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:30:15.346483 kubelet[2477]: I0710 00:30:15.344443 2477 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:30:15.352268 kubelet[2477]: I0710 00:30:15.351958 2477 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:30:15.352268 kubelet[2477]: I0710 00:30:15.352078 2477 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:30:15.356081 kubelet[2477]: I0710 00:30:15.355944 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:30:15.359734 kubelet[2477]: E0710 00:30:15.359679 2477 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:30:15.360325 kubelet[2477]: I0710 00:30:15.360090 2477 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:30:15.361648 kubelet[2477]: I0710 00:30:15.361600 2477 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:30:15.361712 kubelet[2477]: I0710 00:30:15.361654 2477 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:30:15.361712 kubelet[2477]: I0710 00:30:15.361675 2477 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:30:15.361712 kubelet[2477]: I0710 00:30:15.361686 2477 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:30:15.361796 kubelet[2477]: E0710 00:30:15.361752 2477 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:30:15.392723 kubelet[2477]: I0710 00:30:15.392691 2477 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.392887 2477 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.392915 2477 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393087 2477 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393099 2477 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393118 2477 policy_none.go:49] "None policy: Start" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393126 2477 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393135 2477 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:30:15.393479 kubelet[2477]: I0710 00:30:15.393224 2477 state_mem.go:75] "Updated machine memory state" Jul 10 00:30:15.397169 kubelet[2477]: I0710 00:30:15.397138 2477 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:30:15.397531 kubelet[2477]: I0710 
00:30:15.397301 2477 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:30:15.397531 kubelet[2477]: I0710 00:30:15.397320 2477 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:30:15.397639 kubelet[2477]: I0710 00:30:15.397609 2477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:30:15.398580 kubelet[2477]: E0710 00:30:15.398557 2477 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:30:15.463038 kubelet[2477]: I0710 00:30:15.462969 2477 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:15.463383 kubelet[2477]: I0710 00:30:15.463364 2477 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:15.463680 kubelet[2477]: I0710 00:30:15.463548 2477 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.495915 kubelet[2477]: E0710 00:30:15.495755 2477 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:15.495915 kubelet[2477]: E0710 00:30:15.495809 2477 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:15.501509 kubelet[2477]: I0710 00:30:15.501483 2477 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:15.508556 kubelet[2477]: I0710 00:30:15.508369 2477 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:30:15.508556 kubelet[2477]: I0710 00:30:15.508500 2477 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:30:15.645860 kubelet[2477]: I0710 
00:30:15.645744 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:15.646031 kubelet[2477]: I0710 00:30:15.646006 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.646273 kubelet[2477]: I0710 00:30:15.646067 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.646273 kubelet[2477]: I0710 00:30:15.646116 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.646273 kubelet[2477]: I0710 00:30:15.646138 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.646273 
kubelet[2477]: I0710 00:30:15.646158 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:15.646273 kubelet[2477]: I0710 00:30:15.646175 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:15.646411 kubelet[2477]: I0710 00:30:15.646198 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d63a6f4aceeca0cb752b063dfa33607-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2d63a6f4aceeca0cb752b063dfa33607\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:15.646411 kubelet[2477]: I0710 00:30:15.646214 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:15.796266 kubelet[2477]: E0710 00:30:15.796218 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:15.796625 kubelet[2477]: E0710 00:30:15.796566 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:15.796625 kubelet[2477]: E0710 00:30:15.796603 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:15.881410 sudo[2514]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:30:15.881716 sudo[2514]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:30:16.325080 sudo[2514]: pam_unix(sudo:session): session closed for user root Jul 10 00:30:16.331065 kubelet[2477]: I0710 00:30:16.330818 2477 apiserver.go:52] "Watching apiserver" Jul 10 00:30:16.346583 kubelet[2477]: I0710 00:30:16.346520 2477 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:30:16.376188 kubelet[2477]: E0710 00:30:16.375105 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:16.376188 kubelet[2477]: I0710 00:30:16.375272 2477 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:16.376188 kubelet[2477]: E0710 00:30:16.375667 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:16.381487 kubelet[2477]: E0710 00:30:16.380602 2477 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:16.381487 kubelet[2477]: E0710 00:30:16.380753 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:16.405021 kubelet[2477]: 
I0710 00:30:16.404585 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.404567187 podStartE2EDuration="2.404567187s" podCreationTimestamp="2025-07-10 00:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:16.39611948 +0000 UTC m=+1.130822537" watchObservedRunningTime="2025-07-10 00:30:16.404567187 +0000 UTC m=+1.139270244" Jul 10 00:30:16.414003 kubelet[2477]: I0710 00:30:16.413927 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.413911575 podStartE2EDuration="2.413911575s" podCreationTimestamp="2025-07-10 00:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:16.405649758 +0000 UTC m=+1.140352775" watchObservedRunningTime="2025-07-10 00:30:16.413911575 +0000 UTC m=+1.148614632" Jul 10 00:30:17.376593 kubelet[2477]: E0710 00:30:17.376558 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:17.376593 kubelet[2477]: E0710 00:30:17.376577 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:17.880437 sudo[1619]: pam_unix(sudo:session): session closed for user root Jul 10 00:30:17.882581 sshd[1616]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:17.886395 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:49222.service: Deactivated successfully. Jul 10 00:30:17.888130 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 10 00:30:17.888377 systemd[1]: session-7.scope: Consumed 7.697s CPU time, 153.2M memory peak, 0B memory swap peak. Jul 10 00:30:17.889006 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:30:17.889863 systemd-logind[1425]: Removed session 7. Jul 10 00:30:21.077929 kubelet[2477]: E0710 00:30:21.077887 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:21.100392 kubelet[2477]: I0710 00:30:21.100323 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.100307397 podStartE2EDuration="6.100307397s" podCreationTimestamp="2025-07-10 00:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:16.414520898 +0000 UTC m=+1.149223955" watchObservedRunningTime="2025-07-10 00:30:21.100307397 +0000 UTC m=+5.835010454" Jul 10 00:30:21.358945 kubelet[2477]: I0710 00:30:21.358804 2477 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:30:21.359223 containerd[1441]: time="2025-07-10T00:30:21.359189723Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:30:21.359577 kubelet[2477]: I0710 00:30:21.359353 2477 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:30:21.382561 kubelet[2477]: E0710 00:30:21.382521 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.107736 systemd[1]: Created slice kubepods-besteffort-podd21acfea_aa4e_45a9_9c57_582b3aa63080.slice - libcontainer container kubepods-besteffort-podd21acfea_aa4e_45a9_9c57_582b3aa63080.slice. Jul 10 00:30:22.132366 systemd[1]: Created slice kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice - libcontainer container kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice. Jul 10 00:30:22.191509 kubelet[2477]: I0710 00:30:22.191442 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cni-path\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191509 kubelet[2477]: I0710 00:30:22.191510 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-cgroup\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191536 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-run\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191553 2477 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-bpf-maps\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191567 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-hubble-tls\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191584 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwmv\" (UniqueName: \"kubernetes.io/projected/d21acfea-aa4e-45a9-9c57-582b3aa63080-kube-api-access-2jwmv\") pod \"kube-proxy-72q26\" (UID: \"d21acfea-aa4e-45a9-9c57-582b3aa63080\") " pod="kube-system/kube-proxy-72q26" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191600 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e86c90-5453-4e92-b298-392870edbf1c-clustermesh-secrets\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.191888 kubelet[2477]: I0710 00:30:22.191613 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e86c90-5453-4e92-b298-392870edbf1c-cilium-config-path\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191628 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-net\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191642 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d21acfea-aa4e-45a9-9c57-582b3aa63080-xtables-lock\") pod \"kube-proxy-72q26\" (UID: \"d21acfea-aa4e-45a9-9c57-582b3aa63080\") " pod="kube-system/kube-proxy-72q26" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191658 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-hostproc\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191672 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-etc-cni-netd\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191686 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d21acfea-aa4e-45a9-9c57-582b3aa63080-kube-proxy\") pod \"kube-proxy-72q26\" (UID: \"d21acfea-aa4e-45a9-9c57-582b3aa63080\") " pod="kube-system/kube-proxy-72q26" Jul 10 00:30:22.192004 kubelet[2477]: I0710 00:30:22.191699 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-lib-modules\") pod \"cilium-zqsg5\" (UID: 
\"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192115 kubelet[2477]: I0710 00:30:22.191713 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd86t\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-kube-api-access-kd86t\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192115 kubelet[2477]: I0710 00:30:22.191727 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-kernel\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.192115 kubelet[2477]: I0710 00:30:22.191743 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d21acfea-aa4e-45a9-9c57-582b3aa63080-lib-modules\") pod \"kube-proxy-72q26\" (UID: \"d21acfea-aa4e-45a9-9c57-582b3aa63080\") " pod="kube-system/kube-proxy-72q26" Jul 10 00:30:22.192115 kubelet[2477]: I0710 00:30:22.191768 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-xtables-lock\") pod \"cilium-zqsg5\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " pod="kube-system/cilium-zqsg5" Jul 10 00:30:22.408283 systemd[1]: Created slice kubepods-besteffort-pod552a43bb_11a3_41d5_9ee4_9126c34ecb10.slice - libcontainer container kubepods-besteffort-pod552a43bb_11a3_41d5_9ee4_9126c34ecb10.slice. 
Jul 10 00:30:22.426945 kubelet[2477]: E0710 00:30:22.426911 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.427554 containerd[1441]: time="2025-07-10T00:30:22.427515120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72q26,Uid:d21acfea-aa4e-45a9-9c57-582b3aa63080,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:22.437315 kubelet[2477]: E0710 00:30:22.437228 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.437897 containerd[1441]: time="2025-07-10T00:30:22.437768292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqsg5,Uid:16e86c90-5453-4e92-b298-392870edbf1c,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:22.455690 containerd[1441]: time="2025-07-10T00:30:22.455261697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:22.455900 containerd[1441]: time="2025-07-10T00:30:22.455842808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:22.456005 containerd[1441]: time="2025-07-10T00:30:22.455958471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.456320 containerd[1441]: time="2025-07-10T00:30:22.456216360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.460763 containerd[1441]: time="2025-07-10T00:30:22.460688500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:22.460881 containerd[1441]: time="2025-07-10T00:30:22.460741871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:22.460881 containerd[1441]: time="2025-07-10T00:30:22.460756833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.460881 containerd[1441]: time="2025-07-10T00:30:22.460836129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.475662 systemd[1]: Started cri-containerd-45e49e4115ffc719592655366123ffcbfdbe3aa015f5c87c55544d80ba6de797.scope - libcontainer container 45e49e4115ffc719592655366123ffcbfdbe3aa015f5c87c55544d80ba6de797. Jul 10 00:30:22.479112 systemd[1]: Started cri-containerd-90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5.scope - libcontainer container 90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5. 
Jul 10 00:30:22.493842 kubelet[2477]: I0710 00:30:22.493799 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz565\" (UniqueName: \"kubernetes.io/projected/552a43bb-11a3-41d5-9ee4-9126c34ecb10-kube-api-access-xz565\") pod \"cilium-operator-6c4d7847fc-x2p72\" (UID: \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\") " pod="kube-system/cilium-operator-6c4d7847fc-x2p72" Jul 10 00:30:22.493842 kubelet[2477]: I0710 00:30:22.493844 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552a43bb-11a3-41d5-9ee4-9126c34ecb10-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-x2p72\" (UID: \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\") " pod="kube-system/cilium-operator-6c4d7847fc-x2p72" Jul 10 00:30:22.502782 containerd[1441]: time="2025-07-10T00:30:22.502640649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqsg5,Uid:16e86c90-5453-4e92-b298-392870edbf1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\"" Jul 10 00:30:22.506472 containerd[1441]: time="2025-07-10T00:30:22.506359204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72q26,Uid:d21acfea-aa4e-45a9-9c57-582b3aa63080,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e49e4115ffc719592655366123ffcbfdbe3aa015f5c87c55544d80ba6de797\"" Jul 10 00:30:22.507940 kubelet[2477]: E0710 00:30:22.507908 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.508558 kubelet[2477]: E0710 00:30:22.508539 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.510028 
containerd[1441]: time="2025-07-10T00:30:22.509669561Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:30:22.511231 containerd[1441]: time="2025-07-10T00:30:22.511201856Z" level=info msg="CreateContainer within sandbox \"45e49e4115ffc719592655366123ffcbfdbe3aa015f5c87c55544d80ba6de797\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:30:22.524352 containerd[1441]: time="2025-07-10T00:30:22.524299055Z" level=info msg="CreateContainer within sandbox \"45e49e4115ffc719592655366123ffcbfdbe3aa015f5c87c55544d80ba6de797\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8441688ff02aa687ac1be2cb40f0233c551306e9d48f98cfe24292ac1996227\"" Jul 10 00:30:22.525680 containerd[1441]: time="2025-07-10T00:30:22.525651355Z" level=info msg="StartContainer for \"d8441688ff02aa687ac1be2cb40f0233c551306e9d48f98cfe24292ac1996227\"" Jul 10 00:30:22.554626 systemd[1]: Started cri-containerd-d8441688ff02aa687ac1be2cb40f0233c551306e9d48f98cfe24292ac1996227.scope - libcontainer container d8441688ff02aa687ac1be2cb40f0233c551306e9d48f98cfe24292ac1996227. 
Jul 10 00:30:22.583791 containerd[1441]: time="2025-07-10T00:30:22.583720843Z" level=info msg="StartContainer for \"d8441688ff02aa687ac1be2cb40f0233c551306e9d48f98cfe24292ac1996227\" returns successfully" Jul 10 00:30:22.711850 kubelet[2477]: E0710 00:30:22.711520 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:22.712824 containerd[1441]: time="2025-07-10T00:30:22.712397152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x2p72,Uid:552a43bb-11a3-41d5-9ee4-9126c34ecb10,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:22.743403 containerd[1441]: time="2025-07-10T00:30:22.743153308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:22.743403 containerd[1441]: time="2025-07-10T00:30:22.743207358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:22.743403 containerd[1441]: time="2025-07-10T00:30:22.743218320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.743403 containerd[1441]: time="2025-07-10T00:30:22.743307377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:22.762673 systemd[1]: Started cri-containerd-51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf.scope - libcontainer container 51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf. 
Jul 10 00:30:22.800675 containerd[1441]: time="2025-07-10T00:30:22.800635163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x2p72,Uid:552a43bb-11a3-41d5-9ee4-9126c34ecb10,Namespace:kube-system,Attempt:0,} returns sandbox id \"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\"" Jul 10 00:30:22.801479 kubelet[2477]: E0710 00:30:22.801286 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:23.162283 kubelet[2477]: E0710 00:30:23.162160 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:23.391898 kubelet[2477]: E0710 00:30:23.391829 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:23.392847 kubelet[2477]: E0710 00:30:23.392010 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:23.419591 kubelet[2477]: I0710 00:30:23.418980 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72q26" podStartSLOduration=1.418960563 podStartE2EDuration="1.418960563s" podCreationTimestamp="2025-07-10 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:23.408312222 +0000 UTC m=+8.143015279" watchObservedRunningTime="2025-07-10 00:30:23.418960563 +0000 UTC m=+8.153663620" Jul 10 00:30:23.983053 kubelet[2477]: E0710 00:30:23.983014 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:24.393981 kubelet[2477]: E0710 00:30:24.393863 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:24.394415 kubelet[2477]: E0710 00:30:24.394385 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:26.778739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99369668.mount: Deactivated successfully. Jul 10 00:30:28.067050 update_engine[1426]: I20250710 00:30:28.066962 1426 update_attempter.cc:509] Updating boot flags... Jul 10 00:30:28.099478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2873) Jul 10 00:30:28.150255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2875) Jul 10 00:30:32.648002 containerd[1441]: time="2025-07-10T00:30:32.647945334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:32.650154 containerd[1441]: time="2025-07-10T00:30:32.650058259Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 00:30:32.651015 containerd[1441]: time="2025-07-10T00:30:32.650991488Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:32.652576 containerd[1441]: time="2025-07-10T00:30:32.652536827Z" level=info msg="Pulled image 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.142498395s" Jul 10 00:30:32.652623 containerd[1441]: time="2025-07-10T00:30:32.652578231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:30:32.656181 containerd[1441]: time="2025-07-10T00:30:32.656100200Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:30:32.661833 containerd[1441]: time="2025-07-10T00:30:32.661669965Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:30:32.680102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423375273.mount: Deactivated successfully. 
Jul 10 00:30:32.680978 containerd[1441]: time="2025-07-10T00:30:32.680817944Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\"" Jul 10 00:30:32.681624 containerd[1441]: time="2025-07-10T00:30:32.681592074Z" level=info msg="StartContainer for \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\"" Jul 10 00:30:32.713638 systemd[1]: Started cri-containerd-595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634.scope - libcontainer container 595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634. Jul 10 00:30:32.739386 containerd[1441]: time="2025-07-10T00:30:32.739327805Z" level=info msg="StartContainer for \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\" returns successfully" Jul 10 00:30:32.858720 systemd[1]: cri-containerd-595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634.scope: Deactivated successfully. 
Jul 10 00:30:33.103164 containerd[1441]: time="2025-07-10T00:30:33.093380018Z" level=info msg="shim disconnected" id=595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634 namespace=k8s.io Jul 10 00:30:33.103164 containerd[1441]: time="2025-07-10T00:30:33.103079491Z" level=warning msg="cleaning up after shim disconnected" id=595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634 namespace=k8s.io Jul 10 00:30:33.103164 containerd[1441]: time="2025-07-10T00:30:33.103098333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:30:33.421728 kubelet[2477]: E0710 00:30:33.421286 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:33.425362 containerd[1441]: time="2025-07-10T00:30:33.425135758Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:30:33.440651 containerd[1441]: time="2025-07-10T00:30:33.440518860Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\"" Jul 10 00:30:33.441475 containerd[1441]: time="2025-07-10T00:30:33.441391597Z" level=info msg="StartContainer for \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\"" Jul 10 00:30:33.476688 systemd[1]: Started cri-containerd-c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de.scope - libcontainer container c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de. 
Jul 10 00:30:33.506941 containerd[1441]: time="2025-07-10T00:30:33.506893603Z" level=info msg="StartContainer for \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\" returns successfully" Jul 10 00:30:33.527599 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:30:33.528211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:30:33.528318 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:30:33.535865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:30:33.536059 systemd[1]: cri-containerd-c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de.scope: Deactivated successfully. Jul 10 00:30:33.565146 containerd[1441]: time="2025-07-10T00:30:33.565072839Z" level=info msg="shim disconnected" id=c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de namespace=k8s.io Jul 10 00:30:33.565146 containerd[1441]: time="2025-07-10T00:30:33.565133925Z" level=warning msg="cleaning up after shim disconnected" id=c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de namespace=k8s.io Jul 10 00:30:33.565146 containerd[1441]: time="2025-07-10T00:30:33.565143647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:30:33.567098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:30:33.681385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634-rootfs.mount: Deactivated successfully. 
Jul 10 00:30:33.875203 containerd[1441]: time="2025-07-10T00:30:33.875117977Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:33.877365 containerd[1441]: time="2025-07-10T00:30:33.877307859Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 00:30:33.878990 containerd[1441]: time="2025-07-10T00:30:33.878954762Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:33.881110 containerd[1441]: time="2025-07-10T00:30:33.881060955Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.22492087s" Jul 10 00:30:33.881110 containerd[1441]: time="2025-07-10T00:30:33.881108240Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:30:33.883422 containerd[1441]: time="2025-07-10T00:30:33.883373330Z" level=info msg="CreateContainer within sandbox \"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:30:33.901085 containerd[1441]: time="2025-07-10T00:30:33.901030044Z" level=info msg="CreateContainer within sandbox 
\"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\"" Jul 10 00:30:33.902776 containerd[1441]: time="2025-07-10T00:30:33.902725711Z" level=info msg="StartContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\"" Jul 10 00:30:33.935649 systemd[1]: Started cri-containerd-766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f.scope - libcontainer container 766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f. Jul 10 00:30:33.957964 containerd[1441]: time="2025-07-10T00:30:33.957914736Z" level=info msg="StartContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" returns successfully" Jul 10 00:30:34.419422 kubelet[2477]: E0710 00:30:34.419376 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:34.423264 kubelet[2477]: E0710 00:30:34.423210 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:34.425100 containerd[1441]: time="2025-07-10T00:30:34.425057961Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:30:34.431411 kubelet[2477]: I0710 00:30:34.431344 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-x2p72" podStartSLOduration=1.351065698 podStartE2EDuration="12.431329824s" podCreationTimestamp="2025-07-10 00:30:22 +0000 UTC" firstStartedPulling="2025-07-10 00:30:22.801699888 +0000 UTC m=+7.536402945" lastFinishedPulling="2025-07-10 00:30:33.881964014 +0000 
UTC m=+18.616667071" observedRunningTime="2025-07-10 00:30:34.431069716 +0000 UTC m=+19.165772773" watchObservedRunningTime="2025-07-10 00:30:34.431329824 +0000 UTC m=+19.166032881" Jul 10 00:30:34.488759 containerd[1441]: time="2025-07-10T00:30:34.488666523Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\"" Jul 10 00:30:34.491676 containerd[1441]: time="2025-07-10T00:30:34.491635517Z" level=info msg="StartContainer for \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\"" Jul 10 00:30:34.529683 systemd[1]: Started cri-containerd-46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0.scope - libcontainer container 46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0. Jul 10 00:30:34.564188 containerd[1441]: time="2025-07-10T00:30:34.564105136Z" level=info msg="StartContainer for \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\" returns successfully" Jul 10 00:30:34.580543 systemd[1]: cri-containerd-46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0.scope: Deactivated successfully. 
Jul 10 00:30:34.634402 containerd[1441]: time="2025-07-10T00:30:34.634332438Z" level=info msg="shim disconnected" id=46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0 namespace=k8s.io Jul 10 00:30:34.634402 containerd[1441]: time="2025-07-10T00:30:34.634388404Z" level=warning msg="cleaning up after shim disconnected" id=46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0 namespace=k8s.io Jul 10 00:30:34.634402 containerd[1441]: time="2025-07-10T00:30:34.634397525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:30:34.652183 containerd[1441]: time="2025-07-10T00:30:34.652132399Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:30:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 00:30:35.426209 kubelet[2477]: E0710 00:30:35.425784 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:35.426622 kubelet[2477]: E0710 00:30:35.426581 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:35.430877 containerd[1441]: time="2025-07-10T00:30:35.430829791Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:30:35.452985 containerd[1441]: time="2025-07-10T00:30:35.452935185Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\"" Jul 10 00:30:35.454739 containerd[1441]: 
time="2025-07-10T00:30:35.453694621Z" level=info msg="StartContainer for \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\"" Jul 10 00:30:35.480657 systemd[1]: Started cri-containerd-7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db.scope - libcontainer container 7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db. Jul 10 00:30:35.507542 systemd[1]: cri-containerd-7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db.scope: Deactivated successfully. Jul 10 00:30:35.510571 containerd[1441]: time="2025-07-10T00:30:35.510028914Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice/cri-containerd-7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db.scope/memory.events\": no such file or directory" Jul 10 00:30:35.539924 containerd[1441]: time="2025-07-10T00:30:35.539880211Z" level=info msg="StartContainer for \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\" returns successfully" Jul 10 00:30:35.549112 containerd[1441]: time="2025-07-10T00:30:35.549033416Z" level=info msg="shim disconnected" id=7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db namespace=k8s.io Jul 10 00:30:35.549112 containerd[1441]: time="2025-07-10T00:30:35.549094542Z" level=warning msg="cleaning up after shim disconnected" id=7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db namespace=k8s.io Jul 10 00:30:35.549112 containerd[1441]: time="2025-07-10T00:30:35.549103143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:30:35.678559 systemd[1]: run-containerd-runc-k8s.io-7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db-runc.sotXlS.mount: Deactivated successfully. 
Jul 10 00:30:35.678653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db-rootfs.mount: Deactivated successfully. Jul 10 00:30:36.430277 kubelet[2477]: E0710 00:30:36.430114 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:36.432704 containerd[1441]: time="2025-07-10T00:30:36.432658055Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:30:36.454133 containerd[1441]: time="2025-07-10T00:30:36.454076127Z" level=info msg="CreateContainer within sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\"" Jul 10 00:30:36.456206 containerd[1441]: time="2025-07-10T00:30:36.454891646Z" level=info msg="StartContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\"" Jul 10 00:30:36.483631 systemd[1]: Started cri-containerd-081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4.scope - libcontainer container 081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4. Jul 10 00:30:36.514518 containerd[1441]: time="2025-07-10T00:30:36.514472648Z" level=info msg="StartContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" returns successfully" Jul 10 00:30:36.655501 kubelet[2477]: I0710 00:30:36.655456 2477 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:30:36.727270 systemd[1]: Created slice kubepods-burstable-podd1c61c48_93c1_4b8b_9c07_6957109f633d.slice - libcontainer container kubepods-burstable-podd1c61c48_93c1_4b8b_9c07_6957109f633d.slice. 
Jul 10 00:30:36.732861 systemd[1]: Created slice kubepods-burstable-podaf26841c_a656_45d9_9d56_fb56643afb1a.slice - libcontainer container kubepods-burstable-podaf26841c_a656_45d9_9d56_fb56643afb1a.slice.
Jul 10 00:30:36.808443 kubelet[2477]: I0710 00:30:36.808403 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c61c48-93c1-4b8b-9c07-6957109f633d-config-volume\") pod \"coredns-668d6bf9bc-kpsq7\" (UID: \"d1c61c48-93c1-4b8b-9c07-6957109f633d\") " pod="kube-system/coredns-668d6bf9bc-kpsq7"
Jul 10 00:30:36.808443 kubelet[2477]: I0710 00:30:36.808497 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af26841c-a656-45d9-9d56-fb56643afb1a-config-volume\") pod \"coredns-668d6bf9bc-qw87d\" (UID: \"af26841c-a656-45d9-9d56-fb56643afb1a\") " pod="kube-system/coredns-668d6bf9bc-qw87d"
Jul 10 00:30:36.808689 kubelet[2477]: I0710 00:30:36.808518 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4w2x\" (UniqueName: \"kubernetes.io/projected/af26841c-a656-45d9-9d56-fb56643afb1a-kube-api-access-d4w2x\") pod \"coredns-668d6bf9bc-qw87d\" (UID: \"af26841c-a656-45d9-9d56-fb56643afb1a\") " pod="kube-system/coredns-668d6bf9bc-qw87d"
Jul 10 00:30:36.808689 kubelet[2477]: I0710 00:30:36.808553 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9pn\" (UniqueName: \"kubernetes.io/projected/d1c61c48-93c1-4b8b-9c07-6957109f633d-kube-api-access-8t9pn\") pod \"coredns-668d6bf9bc-kpsq7\" (UID: \"d1c61c48-93c1-4b8b-9c07-6957109f633d\") " pod="kube-system/coredns-668d6bf9bc-kpsq7"
Jul 10 00:30:37.033012 kubelet[2477]: E0710 00:30:37.032894 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:37.033740 containerd[1441]: time="2025-07-10T00:30:37.033667368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kpsq7,Uid:d1c61c48-93c1-4b8b-9c07-6957109f633d,Namespace:kube-system,Attempt:0,}"
Jul 10 00:30:37.036884 kubelet[2477]: E0710 00:30:37.036580 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:37.037373 containerd[1441]: time="2025-07-10T00:30:37.037286863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw87d,Uid:af26841c-a656-45d9-9d56-fb56643afb1a,Namespace:kube-system,Attempt:0,}"
Jul 10 00:30:37.435147 kubelet[2477]: E0710 00:30:37.435103 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:37.539506 kubelet[2477]: I0710 00:30:37.539387 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zqsg5" podStartSLOduration=5.392101966 podStartE2EDuration="15.539370736s" podCreationTimestamp="2025-07-10 00:30:22 +0000 UTC" firstStartedPulling="2025-07-10 00:30:22.508699174 +0000 UTC m=+7.243402231" lastFinishedPulling="2025-07-10 00:30:32.655967944 +0000 UTC m=+17.390671001" observedRunningTime="2025-07-10 00:30:37.539079189 +0000 UTC m=+22.273782246" watchObservedRunningTime="2025-07-10 00:30:37.539370736 +0000 UTC m=+22.274073753"
Jul 10 00:30:38.437017 kubelet[2477]: E0710 00:30:38.436968 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:38.835308 systemd-networkd[1381]: cilium_host: Link UP
Jul 10 00:30:38.835460 systemd-networkd[1381]: cilium_net: Link UP
Jul 10 00:30:38.835465 systemd-networkd[1381]: cilium_net: Gained carrier
Jul 10 00:30:38.835596 systemd-networkd[1381]: cilium_host: Gained carrier
Jul 10 00:30:38.835737 systemd-networkd[1381]: cilium_host: Gained IPv6LL
Jul 10 00:30:38.934160 systemd-networkd[1381]: cilium_vxlan: Link UP
Jul 10 00:30:38.934167 systemd-networkd[1381]: cilium_vxlan: Gained carrier
Jul 10 00:30:39.281479 kernel: NET: Registered PF_ALG protocol family
Jul 10 00:30:39.286591 systemd-networkd[1381]: cilium_net: Gained IPv6LL
Jul 10 00:30:39.438733 kubelet[2477]: E0710 00:30:39.438701 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:39.910848 systemd-networkd[1381]: lxc_health: Link UP
Jul 10 00:30:39.923325 systemd-networkd[1381]: lxc_health: Gained carrier
Jul 10 00:30:40.172444 systemd-networkd[1381]: lxc57f4e8ef5e84: Link UP
Jul 10 00:30:40.183475 kernel: eth0: renamed from tmpfc5d9
Jul 10 00:30:40.190920 systemd-networkd[1381]: lxc57f4e8ef5e84: Gained carrier
Jul 10 00:30:40.192710 systemd-networkd[1381]: lxc83c68cf21391: Link UP
Jul 10 00:30:40.202954 kernel: eth0: renamed from tmpd9f6b
Jul 10 00:30:40.213098 systemd-networkd[1381]: lxc83c68cf21391: Gained carrier
Jul 10 00:30:40.374746 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL
Jul 10 00:30:40.478857 kubelet[2477]: E0710 00:30:40.478730 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:41.334651 systemd-networkd[1381]: lxc83c68cf21391: Gained IPv6LL
Jul 10 00:30:41.444685 kubelet[2477]: E0710 00:30:41.444637 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:41.911617 systemd-networkd[1381]: lxc_health: Gained IPv6LL
Jul 10 00:30:42.039566 systemd-networkd[1381]: lxc57f4e8ef5e84: Gained IPv6LL
Jul 10 00:30:43.405970 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:36054.service - OpenSSH per-connection server daemon (10.0.0.1:36054).
Jul 10 00:30:43.453231 sshd[3716]: Accepted publickey for core from 10.0.0.1 port 36054 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:43.455604 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:43.460826 systemd-logind[1425]: New session 8 of user core.
Jul 10 00:30:43.470680 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:30:43.632973 sshd[3716]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:43.636898 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:36054.service: Deactivated successfully.
Jul 10 00:30:43.638430 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:30:43.640921 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:30:43.641886 systemd-logind[1425]: Removed session 8.
Jul 10 00:30:43.916139 containerd[1441]: time="2025-07-10T00:30:43.916050154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:43.916139 containerd[1441]: time="2025-07-10T00:30:43.916096918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:43.916139 containerd[1441]: time="2025-07-10T00:30:43.916107998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:43.916654 containerd[1441]: time="2025-07-10T00:30:43.916176243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:43.921846 containerd[1441]: time="2025-07-10T00:30:43.921583918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:43.921846 containerd[1441]: time="2025-07-10T00:30:43.921650123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:43.921846 containerd[1441]: time="2025-07-10T00:30:43.921661964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:43.921846 containerd[1441]: time="2025-07-10T00:30:43.921757451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:43.937649 systemd[1]: Started cri-containerd-fc5d982656d875ed75eab0c4a8f2cce79890bb3e976256fb105de19d453f4739.scope - libcontainer container fc5d982656d875ed75eab0c4a8f2cce79890bb3e976256fb105de19d453f4739.
Jul 10 00:30:43.942775 systemd[1]: Started cri-containerd-d9f6bcb74f9e5ed7130d8e95bdc44303ccea74a21b4b284754d65c533a741212.scope - libcontainer container d9f6bcb74f9e5ed7130d8e95bdc44303ccea74a21b4b284754d65c533a741212.
Jul 10 00:30:43.948805 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:30:43.953255 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:30:43.975868 containerd[1441]: time="2025-07-10T00:30:43.975821160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kpsq7,Uid:d1c61c48-93c1-4b8b-9c07-6957109f633d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9f6bcb74f9e5ed7130d8e95bdc44303ccea74a21b4b284754d65c533a741212\""
Jul 10 00:30:43.976224 containerd[1441]: time="2025-07-10T00:30:43.975856202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw87d,Uid:af26841c-a656-45d9-9d56-fb56643afb1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc5d982656d875ed75eab0c4a8f2cce79890bb3e976256fb105de19d453f4739\""
Jul 10 00:30:43.977395 kubelet[2477]: E0710 00:30:43.977069 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:43.977395 kubelet[2477]: E0710 00:30:43.977073 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:43.980407 containerd[1441]: time="2025-07-10T00:30:43.980171117Z" level=info msg="CreateContainer within sandbox \"d9f6bcb74f9e5ed7130d8e95bdc44303ccea74a21b4b284754d65c533a741212\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:30:43.980572 containerd[1441]: time="2025-07-10T00:30:43.980500581Z" level=info msg="CreateContainer within sandbox \"fc5d982656d875ed75eab0c4a8f2cce79890bb3e976256fb105de19d453f4739\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:30:43.995983 containerd[1441]: time="2025-07-10T00:30:43.995902186Z" level=info msg="CreateContainer within sandbox \"d9f6bcb74f9e5ed7130d8e95bdc44303ccea74a21b4b284754d65c533a741212\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48471b98896d129911c991e69582fa153ab2ae22a4557aa6ae8853d06fdbf071\""
Jul 10 00:30:43.996846 containerd[1441]: time="2025-07-10T00:30:43.996767849Z" level=info msg="StartContainer for \"48471b98896d129911c991e69582fa153ab2ae22a4557aa6ae8853d06fdbf071\""
Jul 10 00:30:43.999673 containerd[1441]: time="2025-07-10T00:30:43.999637699Z" level=info msg="CreateContainer within sandbox \"fc5d982656d875ed75eab0c4a8f2cce79890bb3e976256fb105de19d453f4739\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73c0bd8f2ac7244381598ff8f834b1464559418bc6af7065a07a2f05bd59774a\""
Jul 10 00:30:44.001380 containerd[1441]: time="2025-07-10T00:30:44.001298260Z" level=info msg="StartContainer for \"73c0bd8f2ac7244381598ff8f834b1464559418bc6af7065a07a2f05bd59774a\""
Jul 10 00:30:44.030647 systemd[1]: Started cri-containerd-48471b98896d129911c991e69582fa153ab2ae22a4557aa6ae8853d06fdbf071.scope - libcontainer container 48471b98896d129911c991e69582fa153ab2ae22a4557aa6ae8853d06fdbf071.
Jul 10 00:30:44.033303 systemd[1]: Started cri-containerd-73c0bd8f2ac7244381598ff8f834b1464559418bc6af7065a07a2f05bd59774a.scope - libcontainer container 73c0bd8f2ac7244381598ff8f834b1464559418bc6af7065a07a2f05bd59774a.
Jul 10 00:30:44.067278 containerd[1441]: time="2025-07-10T00:30:44.067236306Z" level=info msg="StartContainer for \"48471b98896d129911c991e69582fa153ab2ae22a4557aa6ae8853d06fdbf071\" returns successfully"
Jul 10 00:30:44.080866 containerd[1441]: time="2025-07-10T00:30:44.080692214Z" level=info msg="StartContainer for \"73c0bd8f2ac7244381598ff8f834b1464559418bc6af7065a07a2f05bd59774a\" returns successfully"
Jul 10 00:30:44.450817 kubelet[2477]: E0710 00:30:44.450692 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:44.455001 kubelet[2477]: E0710 00:30:44.454960 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:44.463550 kubelet[2477]: I0710 00:30:44.463494 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kpsq7" podStartSLOduration=22.463475898 podStartE2EDuration="22.463475898s" podCreationTimestamp="2025-07-10 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:44.463007665 +0000 UTC m=+29.197710722" watchObservedRunningTime="2025-07-10 00:30:44.463475898 +0000 UTC m=+29.198179075"
Jul 10 00:30:44.488324 kubelet[2477]: I0710 00:30:44.488259 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qw87d" podStartSLOduration=22.488240323 podStartE2EDuration="22.488240323s" podCreationTimestamp="2025-07-10 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:44.487708045 +0000 UTC m=+29.222411102" watchObservedRunningTime="2025-07-10 00:30:44.488240323 +0000 UTC m=+29.222943340"
Jul 10 00:30:45.456723 kubelet[2477]: E0710 00:30:45.456681 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:46.458846 kubelet[2477]: E0710 00:30:46.458763 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:48.647194 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:36068.service - OpenSSH per-connection server daemon (10.0.0.1:36068).
Jul 10 00:30:48.690406 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 36068 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:48.691869 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:48.695648 systemd-logind[1425]: New session 9 of user core.
Jul 10 00:30:48.711608 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:30:48.829642 sshd[3901]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:48.833212 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:36068.service: Deactivated successfully.
Jul 10 00:30:48.835029 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 00:30:48.835696 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Jul 10 00:30:48.836720 systemd-logind[1425]: Removed session 9.
Jul 10 00:30:53.843111 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:33484.service - OpenSSH per-connection server daemon (10.0.0.1:33484).
Jul 10 00:30:53.891481 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 33484 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:53.892790 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:53.896340 systemd-logind[1425]: New session 10 of user core.
Jul 10 00:30:53.905624 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 10 00:30:54.022903 sshd[3918]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:54.038035 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:33484.service: Deactivated successfully.
Jul 10 00:30:54.040762 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 00:30:54.042762 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Jul 10 00:30:54.053914 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500).
Jul 10 00:30:54.055239 systemd-logind[1425]: Removed session 10.
Jul 10 00:30:54.092072 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:54.093895 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:54.098034 systemd-logind[1425]: New session 11 of user core.
Jul 10 00:30:54.107649 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 00:30:54.283122 sshd[3934]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:54.290029 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506).
Jul 10 00:30:54.294226 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:33500.service: Deactivated successfully.
Jul 10 00:30:54.294587 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Jul 10 00:30:54.296076 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 00:30:54.297769 systemd-logind[1425]: Removed session 11.
Jul 10 00:30:54.345305 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:54.348871 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:54.353137 systemd-logind[1425]: New session 12 of user core.
Jul 10 00:30:54.360616 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 00:30:54.456155 kubelet[2477]: E0710 00:30:54.456100 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:54.477583 kubelet[2477]: E0710 00:30:54.477555 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:54.478605 sshd[3945]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:54.483139 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:33506.service: Deactivated successfully.
Jul 10 00:30:54.485978 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 00:30:54.487095 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Jul 10 00:30:54.487904 systemd-logind[1425]: Removed session 12.
Jul 10 00:30:59.507798 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:33522.service - OpenSSH per-connection server daemon (10.0.0.1:33522).
Jul 10 00:30:59.575635 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 33522 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:59.577071 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:59.581337 systemd-logind[1425]: New session 13 of user core.
Jul 10 00:30:59.588665 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 10 00:30:59.709871 sshd[3966]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:59.713888 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:33522.service: Deactivated successfully.
Jul 10 00:30:59.715813 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 00:30:59.716936 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Jul 10 00:30:59.718543 systemd-logind[1425]: Removed session 13.
Jul 10 00:31:04.720307 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090).
Jul 10 00:31:04.801821 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:04.803570 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:04.810910 systemd-logind[1425]: New session 14 of user core.
Jul 10 00:31:04.819717 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 00:31:04.984982 sshd[3981]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:04.997379 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:57090.service: Deactivated successfully.
Jul 10 00:31:05.000917 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:31:05.002739 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:31:05.014120 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:57098.service - OpenSSH per-connection server daemon (10.0.0.1:57098).
Jul 10 00:31:05.019342 systemd-logind[1425]: Removed session 14.
Jul 10 00:31:05.053238 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 57098 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:05.054934 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:05.059887 systemd-logind[1425]: New session 15 of user core.
Jul 10 00:31:05.070667 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:31:05.294714 sshd[3995]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:05.305152 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:57098.service: Deactivated successfully.
Jul 10 00:31:05.308010 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:31:05.309556 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:31:05.315038 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:57114.service - OpenSSH per-connection server daemon (10.0.0.1:57114).
Jul 10 00:31:05.316283 systemd-logind[1425]: Removed session 15.
Jul 10 00:31:05.368489 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 57114 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:05.369693 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:05.374679 systemd-logind[1425]: New session 16 of user core.
Jul 10 00:31:05.380675 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:31:06.218799 sshd[4007]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:06.226860 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:57114.service: Deactivated successfully.
Jul 10 00:31:06.232383 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:31:06.234812 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:31:06.243829 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:57130.service - OpenSSH per-connection server daemon (10.0.0.1:57130).
Jul 10 00:31:06.245715 systemd-logind[1425]: Removed session 16.
Jul 10 00:31:06.284673 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 57130 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:06.286295 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:06.291610 systemd-logind[1425]: New session 17 of user core.
Jul 10 00:31:06.295693 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:31:06.540195 sshd[4027]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:06.552328 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:57130.service: Deactivated successfully.
Jul 10 00:31:06.554945 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:31:06.558341 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:31:06.565982 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:57138.service - OpenSSH per-connection server daemon (10.0.0.1:57138).
Jul 10 00:31:06.567015 systemd-logind[1425]: Removed session 17.
Jul 10 00:31:06.605662 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 57138 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:06.607548 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:06.612470 systemd-logind[1425]: New session 18 of user core.
Jul 10 00:31:06.616648 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:31:06.726379 sshd[4039]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:06.730338 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:57138.service: Deactivated successfully.
Jul 10 00:31:06.732052 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:31:06.732782 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:31:06.733772 systemd-logind[1425]: Removed session 18.
Jul 10 00:31:11.744682 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144).
Jul 10 00:31:11.785729 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:11.787558 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:11.791702 systemd-logind[1425]: New session 19 of user core.
Jul 10 00:31:11.809658 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:31:11.916642 sshd[4055]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:11.920214 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:57144.service: Deactivated successfully.
Jul 10 00:31:11.921820 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:31:11.923030 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:31:11.923859 systemd-logind[1425]: Removed session 19.
Jul 10 00:31:16.931201 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:58646.service - OpenSSH per-connection server daemon (10.0.0.1:58646).
Jul 10 00:31:16.973295 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 58646 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:16.975178 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:16.978935 systemd-logind[1425]: New session 20 of user core.
Jul 10 00:31:16.991633 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:31:17.100811 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:17.104014 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:31:17.104327 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:58646.service: Deactivated successfully.
Jul 10 00:31:17.107282 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:31:17.108695 systemd-logind[1425]: Removed session 20.
Jul 10 00:31:22.112704 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:58650.service - OpenSSH per-connection server daemon (10.0.0.1:58650).
Jul 10 00:31:22.154946 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 58650 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:22.157578 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:22.163173 systemd-logind[1425]: New session 21 of user core.
Jul 10 00:31:22.173673 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:31:22.284949 sshd[4085]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:22.298173 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:58650.service: Deactivated successfully.
Jul 10 00:31:22.302123 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:31:22.303764 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:31:22.316836 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:58664.service - OpenSSH per-connection server daemon (10.0.0.1:58664).
Jul 10 00:31:22.319566 systemd-logind[1425]: Removed session 21.
Jul 10 00:31:22.352918 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 58664 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:31:22.354523 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:31:22.358961 systemd-logind[1425]: New session 22 of user core.
Jul 10 00:31:22.364629 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:31:23.865419 containerd[1441]: time="2025-07-10T00:31:23.865233257Z" level=info msg="StopContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" with timeout 30 (s)"
Jul 10 00:31:23.866676 containerd[1441]: time="2025-07-10T00:31:23.866566702Z" level=info msg="Stop container \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" with signal terminated"
Jul 10 00:31:23.882003 systemd[1]: run-containerd-runc-k8s.io-081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4-runc.G6hTxv.mount: Deactivated successfully.
Jul 10 00:31:23.883429 systemd[1]: cri-containerd-766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f.scope: Deactivated successfully.
Jul 10 00:31:23.899486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f-rootfs.mount: Deactivated successfully.
Jul 10 00:31:23.910757 containerd[1441]: time="2025-07-10T00:31:23.910565975Z" level=info msg="shim disconnected" id=766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f namespace=k8s.io
Jul 10 00:31:23.910757 containerd[1441]: time="2025-07-10T00:31:23.910617734Z" level=warning msg="cleaning up after shim disconnected" id=766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f namespace=k8s.io
Jul 10 00:31:23.910757 containerd[1441]: time="2025-07-10T00:31:23.910626694Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:31:23.913648 containerd[1441]: time="2025-07-10T00:31:23.913609417Z" level=info msg="StopContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" with timeout 2 (s)"
Jul 10 00:31:23.913996 containerd[1441]: time="2025-07-10T00:31:23.913969648Z" level=info msg="Stop container \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" with signal terminated"
Jul 10 00:31:23.920050 systemd-networkd[1381]: lxc_health: Link DOWN
Jul 10 00:31:23.920056 systemd-networkd[1381]: lxc_health: Lost carrier
Jul 10 00:31:23.935790 containerd[1441]: time="2025-07-10T00:31:23.935660372Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:31:23.942648 systemd[1]: cri-containerd-081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4.scope: Deactivated successfully.
Jul 10 00:31:23.942953 systemd[1]: cri-containerd-081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4.scope: Consumed 6.789s CPU time.
Jul 10 00:31:23.962863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4-rootfs.mount: Deactivated successfully.
Jul 10 00:31:23.968405 containerd[1441]: time="2025-07-10T00:31:23.968353775Z" level=info msg="StopContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" returns successfully"
Jul 10 00:31:23.968667 containerd[1441]: time="2025-07-10T00:31:23.968563889Z" level=info msg="shim disconnected" id=081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4 namespace=k8s.io
Jul 10 00:31:23.968667 containerd[1441]: time="2025-07-10T00:31:23.968604328Z" level=warning msg="cleaning up after shim disconnected" id=081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4 namespace=k8s.io
Jul 10 00:31:23.968667 containerd[1441]: time="2025-07-10T00:31:23.968612408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:31:23.969757 containerd[1441]: time="2025-07-10T00:31:23.969707060Z" level=info msg="StopPodSandbox for \"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\""
Jul 10 00:31:23.969936 containerd[1441]: time="2025-07-10T00:31:23.969791138Z" level=info msg="Container to stop \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:23.979380 systemd[1]: cri-containerd-51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf.scope: Deactivated successfully.
Jul 10 00:31:23.994564 containerd[1441]: time="2025-07-10T00:31:23.994522704Z" level=info msg="StopContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" returns successfully"
Jul 10 00:31:23.995003 containerd[1441]: time="2025-07-10T00:31:23.994979253Z" level=info msg="StopPodSandbox for \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\""
Jul 10 00:31:23.995046 containerd[1441]: time="2025-07-10T00:31:23.995015772Z" level=info msg="Container to stop \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:23.995046 containerd[1441]: time="2025-07-10T00:31:23.995028731Z" level=info msg="Container to stop \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:23.995046 containerd[1441]: time="2025-07-10T00:31:23.995038131Z" level=info msg="Container to stop \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:23.995135 containerd[1441]: time="2025-07-10T00:31:23.995047731Z" level=info msg="Container to stop \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:23.995135 containerd[1441]: time="2025-07-10T00:31:23.995056971Z" level=info msg="Container to stop \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:31:24.003721 systemd[1]: cri-containerd-90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5.scope: Deactivated successfully.
Jul 10 00:31:24.006757 containerd[1441]: time="2025-07-10T00:31:24.006434808Z" level=info msg="shim disconnected" id=51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf namespace=k8s.io
Jul 10 00:31:24.006757 containerd[1441]: time="2025-07-10T00:31:24.006595685Z" level=warning msg="cleaning up after shim disconnected" id=51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf namespace=k8s.io
Jul 10 00:31:24.006914 containerd[1441]: time="2025-07-10T00:31:24.006797360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:31:24.018202 containerd[1441]: time="2025-07-10T00:31:24.018157806Z" level=info msg="TearDown network for sandbox \"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\" successfully"
Jul 10 00:31:24.018202 containerd[1441]: time="2025-07-10T00:31:24.018191686Z" level=info msg="StopPodSandbox for \"51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf\" returns successfully"
Jul 10 00:31:24.035662 containerd[1441]: time="2025-07-10T00:31:24.035591347Z" level=info msg="shim disconnected" id=90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5 namespace=k8s.io
Jul 10 00:31:24.035662 containerd[1441]: time="2025-07-10T00:31:24.035646026Z" level=warning msg="cleaning up after shim disconnected" id=90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5 namespace=k8s.io
Jul 10 00:31:24.035662 containerd[1441]: time="2025-07-10T00:31:24.035655266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:31:24.046184 containerd[1441]: time="2025-07-10T00:31:24.046118614Z" level=info msg="TearDown network for sandbox \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" successfully"
Jul 10 00:31:24.046184 containerd[1441]: time="2025-07-10T00:31:24.046164093Z" level=info msg="StopPodSandbox for \"90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5\" returns successfully"
Jul 10 00:31:24.105604 kubelet[2477]: I0710 00:31:24.105526 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e86c90-5453-4e92-b298-392870edbf1c-clustermesh-secrets\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") "
Jul 10 00:31:24.105604 kubelet[2477]: I0710 00:31:24.105590 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-kernel\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") "
Jul 10 00:31:24.105604 kubelet[2477]: I0710 00:31:24.105608 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-lib-modules\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") "
Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105627 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e86c90-5453-4e92-b298-392870edbf1c-cilium-config-path\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") "
Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105700 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-net\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") "
Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105718 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-hostproc\") pod
\"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105734 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-xtables-lock\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105750 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cni-path\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106028 kubelet[2477]: I0710 00:31:24.105767 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd86t\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-kube-api-access-kd86t\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105787 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552a43bb-11a3-41d5-9ee4-9126c34ecb10-cilium-config-path\") pod \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\" (UID: \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105803 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-cgroup\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105877 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-hubble-tls\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105896 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-etc-cni-netd\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105912 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-run\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106167 kubelet[2477]: I0710 00:31:24.105927 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-bpf-maps\") pod \"16e86c90-5453-4e92-b298-392870edbf1c\" (UID: \"16e86c90-5453-4e92-b298-392870edbf1c\") " Jul 10 00:31:24.106287 kubelet[2477]: I0710 00:31:24.105945 2477 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz565\" (UniqueName: \"kubernetes.io/projected/552a43bb-11a3-41d5-9ee4-9126c34ecb10-kube-api-access-xz565\") pod \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\" (UID: \"552a43bb-11a3-41d5-9ee4-9126c34ecb10\") " Jul 10 00:31:24.107183 kubelet[2477]: I0710 00:31:24.107138 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.107183 kubelet[2477]: I0710 00:31:24.107137 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.107286 kubelet[2477]: I0710 00:31:24.107195 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-hostproc" (OuterVolumeSpecName: "hostproc") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.108237 kubelet[2477]: I0710 00:31:24.108183 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109430 kubelet[2477]: I0710 00:31:24.108565 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109430 kubelet[2477]: I0710 00:31:24.108611 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109430 kubelet[2477]: I0710 00:31:24.108635 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109677 kubelet[2477]: I0710 00:31:24.109655 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cni-path" (OuterVolumeSpecName: "cni-path") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109756 kubelet[2477]: I0710 00:31:24.109742 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.109825 kubelet[2477]: I0710 00:31:24.109813 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:31:24.110227 kubelet[2477]: I0710 00:31:24.110193 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552a43bb-11a3-41d5-9ee4-9126c34ecb10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "552a43bb-11a3-41d5-9ee4-9126c34ecb10" (UID: "552a43bb-11a3-41d5-9ee4-9126c34ecb10"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:31:24.110554 kubelet[2477]: I0710 00:31:24.110530 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552a43bb-11a3-41d5-9ee4-9126c34ecb10-kube-api-access-xz565" (OuterVolumeSpecName: "kube-api-access-xz565") pod "552a43bb-11a3-41d5-9ee4-9126c34ecb10" (UID: "552a43bb-11a3-41d5-9ee4-9126c34ecb10"). InnerVolumeSpecName "kube-api-access-xz565". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:31:24.110648 kubelet[2477]: I0710 00:31:24.110634 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-kube-api-access-kd86t" (OuterVolumeSpecName: "kube-api-access-kd86t") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "kube-api-access-kd86t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:31:24.110987 kubelet[2477]: I0710 00:31:24.110956 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e86c90-5453-4e92-b298-392870edbf1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:31:24.111145 kubelet[2477]: I0710 00:31:24.111111 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:31:24.111353 kubelet[2477]: I0710 00:31:24.111310 2477 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e86c90-5453-4e92-b298-392870edbf1c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16e86c90-5453-4e92-b298-392870edbf1c" (UID: "16e86c90-5453-4e92-b298-392870edbf1c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:31:24.206699 kubelet[2477]: I0710 00:31:24.206650 2477 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.206699 kubelet[2477]: I0710 00:31:24.206685 2477 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kd86t\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-kube-api-access-kd86t\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209490 kubelet[2477]: I0710 00:31:24.209436 2477 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552a43bb-11a3-41d5-9ee4-9126c34ecb10-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209490 kubelet[2477]: I0710 00:31:24.209476 2477 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209490 kubelet[2477]: I0710 00:31:24.209487 2477 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e86c90-5453-4e92-b298-392870edbf1c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209490 kubelet[2477]: I0710 00:31:24.209495 2477 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209504 2477 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209513 
2477 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209523 2477 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xz565\" (UniqueName: \"kubernetes.io/projected/552a43bb-11a3-41d5-9ee4-9126c34ecb10-kube-api-access-xz565\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209532 2477 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e86c90-5453-4e92-b298-392870edbf1c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209540 2477 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209548 2477 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209557 2477 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e86c90-5453-4e92-b298-392870edbf1c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209622 kubelet[2477]: I0710 00:31:24.209566 2477 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209787 kubelet[2477]: I0710 00:31:24.209574 2477 reconciler_common.go:299] "Volume detached for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.209787 kubelet[2477]: I0710 00:31:24.209582 2477 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e86c90-5453-4e92-b298-392870edbf1c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:24.559341 kubelet[2477]: I0710 00:31:24.559221 2477 scope.go:117] "RemoveContainer" containerID="766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f" Jul 10 00:31:24.561043 containerd[1441]: time="2025-07-10T00:31:24.560819393Z" level=info msg="RemoveContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\"" Jul 10 00:31:24.566837 containerd[1441]: time="2025-07-10T00:31:24.566801089Z" level=info msg="RemoveContainer for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" returns successfully" Jul 10 00:31:24.567235 kubelet[2477]: I0710 00:31:24.567049 2477 scope.go:117] "RemoveContainer" containerID="766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f" Jul 10 00:31:24.567321 containerd[1441]: time="2025-07-10T00:31:24.567288877Z" level=error msg="ContainerStatus for \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\": not found" Jul 10 00:31:24.567419 systemd[1]: Removed slice kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice - libcontainer container kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice. Jul 10 00:31:24.567759 systemd[1]: kubepods-burstable-pod16e86c90_5453_4e92_b298_392870edbf1c.slice: Consumed 6.992s CPU time. 
Jul 10 00:31:24.568428 kubelet[2477]: E0710 00:31:24.567530 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\": not found" containerID="766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f" Jul 10 00:31:24.569506 systemd[1]: Removed slice kubepods-besteffort-pod552a43bb_11a3_41d5_9ee4_9126c34ecb10.slice - libcontainer container kubepods-besteffort-pod552a43bb_11a3_41d5_9ee4_9126c34ecb10.slice. Jul 10 00:31:24.573676 kubelet[2477]: I0710 00:31:24.573546 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f"} err="failed to get container status \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"766305a6949343c714ce1732b1d5244389789c6d1ae679dd90668ebf0eec8b6f\": not found" Jul 10 00:31:24.573676 kubelet[2477]: I0710 00:31:24.573676 2477 scope.go:117] "RemoveContainer" containerID="081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4" Jul 10 00:31:24.575754 containerd[1441]: time="2025-07-10T00:31:24.575695155Z" level=info msg="RemoveContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\"" Jul 10 00:31:24.582010 containerd[1441]: time="2025-07-10T00:31:24.581968524Z" level=info msg="RemoveContainer for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" returns successfully" Jul 10 00:31:24.582294 kubelet[2477]: I0710 00:31:24.582272 2477 scope.go:117] "RemoveContainer" containerID="7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db" Jul 10 00:31:24.583345 containerd[1441]: time="2025-07-10T00:31:24.583311692Z" level=info msg="RemoveContainer for \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\"" 
Jul 10 00:31:24.598752 containerd[1441]: time="2025-07-10T00:31:24.598650883Z" level=info msg="RemoveContainer for \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\" returns successfully" Jul 10 00:31:24.598883 kubelet[2477]: I0710 00:31:24.598849 2477 scope.go:117] "RemoveContainer" containerID="46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0" Jul 10 00:31:24.601336 containerd[1441]: time="2025-07-10T00:31:24.601147783Z" level=info msg="RemoveContainer for \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\"" Jul 10 00:31:24.606473 containerd[1441]: time="2025-07-10T00:31:24.606405816Z" level=info msg="RemoveContainer for \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\" returns successfully" Jul 10 00:31:24.608102 kubelet[2477]: I0710 00:31:24.608072 2477 scope.go:117] "RemoveContainer" containerID="c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de" Jul 10 00:31:24.609646 containerd[1441]: time="2025-07-10T00:31:24.609603779Z" level=info msg="RemoveContainer for \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\"" Jul 10 00:31:24.617042 containerd[1441]: time="2025-07-10T00:31:24.616996522Z" level=info msg="RemoveContainer for \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\" returns successfully" Jul 10 00:31:24.617369 kubelet[2477]: I0710 00:31:24.617245 2477 scope.go:117] "RemoveContainer" containerID="595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634" Jul 10 00:31:24.618244 containerd[1441]: time="2025-07-10T00:31:24.618217732Z" level=info msg="RemoveContainer for \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\"" Jul 10 00:31:24.620302 containerd[1441]: time="2025-07-10T00:31:24.620261523Z" level=info msg="RemoveContainer for \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\" returns successfully" Jul 10 00:31:24.620617 kubelet[2477]: I0710 00:31:24.620519 2477 scope.go:117] 
"RemoveContainer" containerID="081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4" Jul 10 00:31:24.620759 containerd[1441]: time="2025-07-10T00:31:24.620710192Z" level=error msg="ContainerStatus for \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\": not found" Jul 10 00:31:24.620842 kubelet[2477]: E0710 00:31:24.620818 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\": not found" containerID="081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4" Jul 10 00:31:24.620880 kubelet[2477]: I0710 00:31:24.620848 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4"} err="failed to get container status \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\": rpc error: code = NotFound desc = an error occurred when try to find container \"081886ef96d5133be04eb82e6988a9664f1f931494442a253509ebb891547fe4\": not found" Jul 10 00:31:24.620880 kubelet[2477]: I0710 00:31:24.620869 2477 scope.go:117] "RemoveContainer" containerID="7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db" Jul 10 00:31:24.621056 containerd[1441]: time="2025-07-10T00:31:24.621024025Z" level=error msg="ContainerStatus for \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\": not found" Jul 10 00:31:24.621156 kubelet[2477]: E0710 00:31:24.621134 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\": not found" containerID="7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db" Jul 10 00:31:24.621189 kubelet[2477]: I0710 00:31:24.621162 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db"} err="failed to get container status \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\": rpc error: code = NotFound desc = an error occurred when try to find container \"7027ad87d77a6733130b73729a75b125d5020da1c8ca3606b42128e86ccb72db\": not found" Jul 10 00:31:24.621189 kubelet[2477]: I0710 00:31:24.621182 2477 scope.go:117] "RemoveContainer" containerID="46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0" Jul 10 00:31:24.621351 containerd[1441]: time="2025-07-10T00:31:24.621317978Z" level=error msg="ContainerStatus for \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\": not found" Jul 10 00:31:24.621473 kubelet[2477]: E0710 00:31:24.621402 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\": not found" containerID="46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0" Jul 10 00:31:24.621473 kubelet[2477]: I0710 00:31:24.621421 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0"} err="failed to get container status \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"46c250f2db0f74908af18d3befb945ba35305f6c6f076e43af1b5fc1b38c6af0\": not found" Jul 10 00:31:24.621473 kubelet[2477]: I0710 00:31:24.621434 2477 scope.go:117] "RemoveContainer" containerID="c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de" Jul 10 00:31:24.621971 containerd[1441]: time="2025-07-10T00:31:24.621695809Z" level=error msg="ContainerStatus for \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\": not found" Jul 10 00:31:24.622049 kubelet[2477]: E0710 00:31:24.621844 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\": not found" containerID="c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de" Jul 10 00:31:24.622049 kubelet[2477]: I0710 00:31:24.621869 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de"} err="failed to get container status \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5820dc945e5e48590fc8180326277997f74a0c93f0848ae7745890e3516c1de\": not found" Jul 10 00:31:24.622049 kubelet[2477]: I0710 00:31:24.621885 2477 scope.go:117] "RemoveContainer" containerID="595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634" Jul 10 00:31:24.622141 containerd[1441]: time="2025-07-10T00:31:24.622045520Z" level=error msg="ContainerStatus for \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\": not found" Jul 10 00:31:24.622165 kubelet[2477]: E0710 00:31:24.622144 2477 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\": not found" containerID="595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634" Jul 10 00:31:24.622194 kubelet[2477]: I0710 00:31:24.622163 2477 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634"} err="failed to get container status \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\": rpc error: code = NotFound desc = an error occurred when try to find container \"595bf764593c7225cfdd0d05a9dd5db3d766f1e6f1a9e21b731b707f76a0a634\": not found" Jul 10 00:31:24.876493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf-rootfs.mount: Deactivated successfully. Jul 10 00:31:24.876592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51cf48a2571133e36d7fa2a9c92b28ca61de0e7543f408e2c5a2f4745de7fadf-shm.mount: Deactivated successfully. Jul 10 00:31:24.876660 systemd[1]: var-lib-kubelet-pods-552a43bb\x2d11a3\x2d41d5\x2d9ee4\x2d9126c34ecb10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxz565.mount: Deactivated successfully. Jul 10 00:31:24.876712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5-rootfs.mount: Deactivated successfully. Jul 10 00:31:24.876770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90332baa8ecf12a628bb59e00f436b5aa3bf812f9cc0e80b39782bac033df3b5-shm.mount: Deactivated successfully. 
Jul 10 00:31:24.876817 systemd[1]: var-lib-kubelet-pods-16e86c90\x2d5453\x2d4e92\x2db298\x2d392870edbf1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkd86t.mount: Deactivated successfully. Jul 10 00:31:24.876866 systemd[1]: var-lib-kubelet-pods-16e86c90\x2d5453\x2d4e92\x2db298\x2d392870edbf1c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:31:24.876913 systemd[1]: var-lib-kubelet-pods-16e86c90\x2d5453\x2d4e92\x2db298\x2d392870edbf1c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:31:25.370537 kubelet[2477]: I0710 00:31:25.370489 2477 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e86c90-5453-4e92-b298-392870edbf1c" path="/var/lib/kubelet/pods/16e86c90-5453-4e92-b298-392870edbf1c/volumes" Jul 10 00:31:25.371145 kubelet[2477]: I0710 00:31:25.371109 2477 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552a43bb-11a3-41d5-9ee4-9126c34ecb10" path="/var/lib/kubelet/pods/552a43bb-11a3-41d5-9ee4-9126c34ecb10/volumes" Jul 10 00:31:25.414235 kubelet[2477]: E0710 00:31:25.414148 2477 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:31:25.807596 sshd[4100]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:25.817050 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:58664.service: Deactivated successfully. Jul 10 00:31:25.818790 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:31:25.820052 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:31:25.831824 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:33216.service - OpenSSH per-connection server daemon (10.0.0.1:33216). Jul 10 00:31:25.833656 systemd-logind[1425]: Removed session 22. 
Jul 10 00:31:25.869766 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 33216 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:25.871092 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:25.874873 systemd-logind[1425]: New session 23 of user core. Jul 10 00:31:25.886610 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:31:27.020211 kubelet[2477]: I0710 00:31:27.019362 2477 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:31:27Z","lastTransitionTime":"2025-07-10T00:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:31:27.679134 sshd[4266]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:27.697133 kubelet[2477]: I0710 00:31:27.695862 2477 memory_manager.go:355] "RemoveStaleState removing state" podUID="16e86c90-5453-4e92-b298-392870edbf1c" containerName="cilium-agent" Jul 10 00:31:27.697133 kubelet[2477]: I0710 00:31:27.695950 2477 memory_manager.go:355] "RemoveStaleState removing state" podUID="552a43bb-11a3-41d5-9ee4-9126c34ecb10" containerName="cilium-operator" Jul 10 00:31:27.696337 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:33216.service: Deactivated successfully. Jul 10 00:31:27.703488 kubelet[2477]: I0710 00:31:27.701627 2477 status_manager.go:890] "Failed to get status for pod" podUID="f3a92964-1971-4aae-9a04-5e88eff2e132" pod="kube-system/cilium-r4w67" err="pods \"cilium-r4w67\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 10 00:31:27.702260 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 10 00:31:27.702421 systemd[1]: session-23.scope: Consumed 1.685s CPU time. Jul 10 00:31:27.704307 kubelet[2477]: W0710 00:31:27.703684 2477 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 10 00:31:27.704307 kubelet[2477]: E0710 00:31:27.703745 2477 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 00:31:27.706481 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:31:27.708215 systemd-logind[1425]: Removed session 23. Jul 10 00:31:27.716774 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:33228.service - OpenSSH per-connection server daemon (10.0.0.1:33228). Jul 10 00:31:27.725781 systemd[1]: Created slice kubepods-burstable-podf3a92964_1971_4aae_9a04_5e88eff2e132.slice - libcontainer container kubepods-burstable-podf3a92964_1971_4aae_9a04_5e88eff2e132.slice. 
Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729852 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-etc-cni-netd\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729892 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-xtables-lock\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729917 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-host-proc-sys-kernel\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729937 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-bpf-maps\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729952 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-cni-path\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731499 kubelet[2477]: I0710 00:31:27.729969 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-host-proc-sys-net\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.729984 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-cilium-run\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.729998 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-hostproc\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.730013 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-cilium-cgroup\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.730046 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a92964-1971-4aae-9a04-5e88eff2e132-lib-modules\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.730066 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a92964-1971-4aae-9a04-5e88eff2e132-cilium-config-path\") pod 
\"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731746 kubelet[2477]: I0710 00:31:27.730083 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a92964-1971-4aae-9a04-5e88eff2e132-hubble-tls\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731861 kubelet[2477]: I0710 00:31:27.730098 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rckr\" (UniqueName: \"kubernetes.io/projected/f3a92964-1971-4aae-9a04-5e88eff2e132-kube-api-access-4rckr\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731861 kubelet[2477]: I0710 00:31:27.730117 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3a92964-1971-4aae-9a04-5e88eff2e132-cilium-ipsec-secrets\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.731861 kubelet[2477]: I0710 00:31:27.730134 2477 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a92964-1971-4aae-9a04-5e88eff2e132-clustermesh-secrets\") pod \"cilium-r4w67\" (UID: \"f3a92964-1971-4aae-9a04-5e88eff2e132\") " pod="kube-system/cilium-r4w67" Jul 10 00:31:27.756082 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 33228 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:27.758524 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:27.762166 systemd-logind[1425]: New session 24 of user core. 
Jul 10 00:31:27.768687 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:31:27.819279 sshd[4279]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:27.829996 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:33228.service: Deactivated successfully. Jul 10 00:31:27.842381 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:31:27.845645 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:31:27.855797 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:33232.service - OpenSSH per-connection server daemon (10.0.0.1:33232). Jul 10 00:31:27.861873 systemd-logind[1425]: Removed session 24. Jul 10 00:31:27.895301 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 33232 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:27.896707 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:27.900362 systemd-logind[1425]: New session 25 of user core. Jul 10 00:31:27.908623 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:31:28.834519 kubelet[2477]: E0710 00:31:28.834437 2477 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 10 00:31:28.834935 kubelet[2477]: E0710 00:31:28.834545 2477 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3a92964-1971-4aae-9a04-5e88eff2e132-clustermesh-secrets podName:f3a92964-1971-4aae-9a04-5e88eff2e132 nodeName:}" failed. No retries permitted until 2025-07-10 00:31:29.334523886 +0000 UTC m=+74.069226903 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f3a92964-1971-4aae-9a04-5e88eff2e132-clustermesh-secrets") pod "cilium-r4w67" (UID: "f3a92964-1971-4aae-9a04-5e88eff2e132") : failed to sync secret cache: timed out waiting for the condition Jul 10 00:31:29.534060 kubelet[2477]: E0710 00:31:29.533518 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:29.534198 containerd[1441]: time="2025-07-10T00:31:29.534080973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4w67,Uid:f3a92964-1971-4aae-9a04-5e88eff2e132,Namespace:kube-system,Attempt:0,}" Jul 10 00:31:29.561212 containerd[1441]: time="2025-07-10T00:31:29.557318979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:29.561212 containerd[1441]: time="2025-07-10T00:31:29.557412137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:29.561212 containerd[1441]: time="2025-07-10T00:31:29.557444457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.561212 containerd[1441]: time="2025-07-10T00:31:29.557878050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.581689 systemd[1]: Started cri-containerd-78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc.scope - libcontainer container 78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc. 
Jul 10 00:31:29.609744 containerd[1441]: time="2025-07-10T00:31:29.609697532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4w67,Uid:f3a92964-1971-4aae-9a04-5e88eff2e132,Namespace:kube-system,Attempt:0,} returns sandbox id \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\"" Jul 10 00:31:29.610594 kubelet[2477]: E0710 00:31:29.610562 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:29.612617 containerd[1441]: time="2025-07-10T00:31:29.612579283Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:31:29.638077 containerd[1441]: time="2025-07-10T00:31:29.637812656Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207\"" Jul 10 00:31:29.638752 containerd[1441]: time="2025-07-10T00:31:29.638499885Z" level=info msg="StartContainer for \"9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207\"" Jul 10 00:31:29.662643 systemd[1]: Started cri-containerd-9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207.scope - libcontainer container 9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207. Jul 10 00:31:29.684736 containerd[1441]: time="2025-07-10T00:31:29.684692342Z" level=info msg="StartContainer for \"9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207\" returns successfully" Jul 10 00:31:29.695578 systemd[1]: cri-containerd-9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207.scope: Deactivated successfully. 
Jul 10 00:31:29.734736 containerd[1441]: time="2025-07-10T00:31:29.734673736Z" level=info msg="shim disconnected" id=9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207 namespace=k8s.io Jul 10 00:31:29.734736 containerd[1441]: time="2025-07-10T00:31:29.734730415Z" level=warning msg="cleaning up after shim disconnected" id=9b9bc7afdea84a0bb4e0299e5d9c615019a292b78754ca73180297a817710207 namespace=k8s.io Jul 10 00:31:29.734736 containerd[1441]: time="2025-07-10T00:31:29.734741975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:31:30.416178 kubelet[2477]: E0710 00:31:30.416138 2477 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:31:30.593964 kubelet[2477]: E0710 00:31:30.593916 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:30.595483 containerd[1441]: time="2025-07-10T00:31:30.595423206Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:31:30.607970 containerd[1441]: time="2025-07-10T00:31:30.607827332Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e\"" Jul 10 00:31:30.609289 containerd[1441]: time="2025-07-10T00:31:30.609256510Z" level=info msg="StartContainer for \"ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e\"" Jul 10 00:31:30.638677 systemd[1]: Started cri-containerd-ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e.scope - libcontainer container 
ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e. Jul 10 00:31:30.665621 containerd[1441]: time="2025-07-10T00:31:30.665566469Z" level=info msg="StartContainer for \"ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e\" returns successfully" Jul 10 00:31:30.672630 systemd[1]: cri-containerd-ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e.scope: Deactivated successfully. Jul 10 00:31:30.690985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e-rootfs.mount: Deactivated successfully. Jul 10 00:31:30.698048 containerd[1441]: time="2025-07-10T00:31:30.697986842Z" level=info msg="shim disconnected" id=ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e namespace=k8s.io Jul 10 00:31:30.698048 containerd[1441]: time="2025-07-10T00:31:30.698043801Z" level=warning msg="cleaning up after shim disconnected" id=ec17964eee3015e66cb8d92aff7d1fb376065bc866ccde8690684f9389d3ff5e namespace=k8s.io Jul 10 00:31:30.698281 containerd[1441]: time="2025-07-10T00:31:30.698054601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:31:31.597277 kubelet[2477]: E0710 00:31:31.597224 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:31.602233 containerd[1441]: time="2025-07-10T00:31:31.602089416Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:31:31.634351 containerd[1441]: time="2025-07-10T00:31:31.634299473Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd\"" 
Jul 10 00:31:31.635826 containerd[1441]: time="2025-07-10T00:31:31.635788331Z" level=info msg="StartContainer for \"3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd\"" Jul 10 00:31:31.675622 systemd[1]: Started cri-containerd-3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd.scope - libcontainer container 3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd. Jul 10 00:31:31.703994 systemd[1]: cri-containerd-3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd.scope: Deactivated successfully. Jul 10 00:31:31.705526 containerd[1441]: time="2025-07-10T00:31:31.705412490Z" level=info msg="StartContainer for \"3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd\" returns successfully" Jul 10 00:31:31.725324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd-rootfs.mount: Deactivated successfully. Jul 10 00:31:31.738292 containerd[1441]: time="2025-07-10T00:31:31.738056060Z" level=info msg="shim disconnected" id=3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd namespace=k8s.io Jul 10 00:31:31.738292 containerd[1441]: time="2025-07-10T00:31:31.738115179Z" level=warning msg="cleaning up after shim disconnected" id=3a9201ec59d4626f6913fec156c81a394ee678c51108ab69bee558471d3505bd namespace=k8s.io Jul 10 00:31:31.738292 containerd[1441]: time="2025-07-10T00:31:31.738123739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:31:32.601491 kubelet[2477]: E0710 00:31:32.600898 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:32.602627 containerd[1441]: time="2025-07-10T00:31:32.602589909Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:31:32.616529 containerd[1441]: time="2025-07-10T00:31:32.616475726Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c\"" Jul 10 00:31:32.617827 containerd[1441]: time="2025-07-10T00:31:32.617078118Z" level=info msg="StartContainer for \"99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c\"" Jul 10 00:31:32.650651 systemd[1]: Started cri-containerd-99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c.scope - libcontainer container 99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c. Jul 10 00:31:32.652267 systemd[1]: run-containerd-runc-k8s.io-99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c-runc.zNATPp.mount: Deactivated successfully. Jul 10 00:31:32.672326 systemd[1]: cri-containerd-99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c.scope: Deactivated successfully. Jul 10 00:31:32.683062 containerd[1441]: time="2025-07-10T00:31:32.683011650Z" level=info msg="StartContainer for \"99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c\" returns successfully" Jul 10 00:31:32.697885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c-rootfs.mount: Deactivated successfully. 
Jul 10 00:31:32.710880 containerd[1441]: time="2025-07-10T00:31:32.710669126Z" level=info msg="shim disconnected" id=99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c namespace=k8s.io Jul 10 00:31:32.710880 containerd[1441]: time="2025-07-10T00:31:32.710724885Z" level=warning msg="cleaning up after shim disconnected" id=99ad9ee6dee12d3370152c47bc3d6a440dcdd91eb67b7c1ac31418a68eda215c namespace=k8s.io Jul 10 00:31:32.710880 containerd[1441]: time="2025-07-10T00:31:32.710734605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:31:33.606763 kubelet[2477]: E0710 00:31:33.606717 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:33.608529 containerd[1441]: time="2025-07-10T00:31:33.608494370Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:31:33.631803 containerd[1441]: time="2025-07-10T00:31:33.631664932Z" level=info msg="CreateContainer within sandbox \"78a218a94d38343ff435b299948d15c9e79db33bb1ef53af86994c3399af08bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b\"" Jul 10 00:31:33.632390 containerd[1441]: time="2025-07-10T00:31:33.632300444Z" level=info msg="StartContainer for \"84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b\"" Jul 10 00:31:33.663628 systemd[1]: Started cri-containerd-84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b.scope - libcontainer container 84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b. 
Jul 10 00:31:33.695752 containerd[1441]: time="2025-07-10T00:31:33.695704244Z" level=info msg="StartContainer for \"84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b\" returns successfully" Jul 10 00:31:33.988415 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 10 00:31:34.611773 kubelet[2477]: E0710 00:31:34.611742 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:34.629259 kubelet[2477]: I0710 00:31:34.629197 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r4w67" podStartSLOduration=7.629181602 podStartE2EDuration="7.629181602s" podCreationTimestamp="2025-07-10 00:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:31:34.629021324 +0000 UTC m=+79.363724381" watchObservedRunningTime="2025-07-10 00:31:34.629181602 +0000 UTC m=+79.363884659" Jul 10 00:31:35.613095 kubelet[2477]: E0710 00:31:35.613054 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:36.868921 systemd-networkd[1381]: lxc_health: Link UP Jul 10 00:31:36.880514 systemd-networkd[1381]: lxc_health: Gained carrier Jul 10 00:31:37.362779 kubelet[2477]: E0710 00:31:37.362728 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:37.535439 kubelet[2477]: E0710 00:31:37.535387 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:37.617463 kubelet[2477]: E0710 
00:31:37.617319 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:38.619075 kubelet[2477]: E0710 00:31:38.618701 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:38.652317 systemd[1]: run-containerd-runc-k8s.io-84a9c51c0da5f3aff3d8206580ae9b239c9159db975d3d8d62b324816e612e5b-runc.MnNcVN.mount: Deactivated successfully. Jul 10 00:31:38.742612 systemd-networkd[1381]: lxc_health: Gained IPv6LL Jul 10 00:31:40.788769 kubelet[2477]: E0710 00:31:40.788584 2477 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58244->127.0.0.1:43265: write tcp 127.0.0.1:58244->127.0.0.1:43265: write: broken pipe Jul 10 00:31:42.892058 kubelet[2477]: E0710 00:31:42.892023 2477 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:58258->127.0.0.1:43265: write tcp 10.0.0.71:10250->10.0.0.71:50288: write: broken pipe Jul 10 00:31:42.894552 sshd[4290]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:42.897379 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:33232.service: Deactivated successfully. Jul 10 00:31:42.899239 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:31:42.901067 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:31:42.902201 systemd-logind[1425]: Removed session 25. Jul 10 00:31:44.362335 kubelet[2477]: E0710 00:31:44.362250 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"