Jul 14 23:39:35.902940 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 14 23:39:35.902963 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Jul 14 22:18:15 -00 2025 Jul 14 23:39:35.902974 kernel: KASLR enabled Jul 14 23:39:35.902980 kernel: efi: EFI v2.7 by EDK II Jul 14 23:39:35.902985 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Jul 14 23:39:35.902991 kernel: random: crng init done Jul 14 23:39:35.902998 kernel: secureboot: Secure boot disabled Jul 14 23:39:35.903004 kernel: ACPI: Early table checksum verification disabled Jul 14 23:39:35.903010 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jul 14 23:39:35.903018 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 14 23:39:35.903024 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903030 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903036 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903042 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903049 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903057 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903063 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903070 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903076 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 23:39:35.903082 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 14 23:39:35.903089 kernel: NUMA: Failed to initialise from firmware Jul 14 23:39:35.903095 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 23:39:35.903102 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jul 14 23:39:35.903108 kernel: Zone ranges: Jul 14 23:39:35.903114 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 23:39:35.903121 kernel: DMA32 empty Jul 14 23:39:35.903127 kernel: Normal empty Jul 14 23:39:35.903134 kernel: Movable zone start for each node Jul 14 23:39:35.903140 kernel: Early memory node ranges Jul 14 23:39:35.903146 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Jul 14 23:39:35.903153 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Jul 14 23:39:35.903159 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Jul 14 23:39:35.903166 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 14 23:39:35.903172 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 14 23:39:35.903178 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 14 23:39:35.903185 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 14 23:39:35.903191 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 14 23:39:35.903198 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 14 23:39:35.903205 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 23:39:35.903211 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 14 23:39:35.903220 kernel: psci: 
probing for conduit method from ACPI. Jul 14 23:39:35.903226 kernel: psci: PSCIv1.1 detected in firmware. Jul 14 23:39:35.903233 kernel: psci: Using standard PSCI v0.2 function IDs Jul 14 23:39:35.903241 kernel: psci: Trusted OS migration not required Jul 14 23:39:35.903248 kernel: psci: SMC Calling Convention v1.1 Jul 14 23:39:35.903254 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 14 23:39:35.903261 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 14 23:39:35.903268 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 14 23:39:35.903275 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 14 23:39:35.903281 kernel: Detected PIPT I-cache on CPU0 Jul 14 23:39:35.903288 kernel: CPU features: detected: GIC system register CPU interface Jul 14 23:39:35.903295 kernel: CPU features: detected: Hardware dirty bit management Jul 14 23:39:35.903311 kernel: CPU features: detected: Spectre-v4 Jul 14 23:39:35.903320 kernel: CPU features: detected: Spectre-BHB Jul 14 23:39:35.903327 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 14 23:39:35.903334 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 14 23:39:35.903340 kernel: CPU features: detected: ARM erratum 1418040 Jul 14 23:39:35.903347 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 14 23:39:35.903353 kernel: alternatives: applying boot alternatives Jul 14 23:39:35.903361 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=512217804ec478510322362c135dfb84c13b721a5aed5e04313ad4e2676ce8f7 Jul 14 23:39:35.903368 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 23:39:35.903375 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 23:39:35.903382 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 23:39:35.903389 kernel: Fallback order for Node 0: 0 Jul 14 23:39:35.903397 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 14 23:39:35.903403 kernel: Policy zone: DMA Jul 14 23:39:35.903410 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 23:39:35.903416 kernel: software IO TLB: area num 4. Jul 14 23:39:35.903423 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 14 23:39:35.903430 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved) Jul 14 23:39:35.903437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 23:39:35.903444 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 23:39:35.903451 kernel: rcu: RCU event tracing is enabled. Jul 14 23:39:35.903458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 23:39:35.903465 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 23:39:35.903472 kernel: Tracing variant of Tasks RCU enabled. Jul 14 23:39:35.903480 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 14 23:39:35.903486 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 23:39:35.903496 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 14 23:39:35.903504 kernel: GICv3: 256 SPIs implemented Jul 14 23:39:35.903510 kernel: GICv3: 0 Extended SPIs implemented Jul 14 23:39:35.903517 kernel: Root IRQ handler: gic_handle_irq Jul 14 23:39:35.903523 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 14 23:39:35.903529 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 14 23:39:35.903536 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 14 23:39:35.903543 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 14 23:39:35.903549 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 14 23:39:35.903558 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 14 23:39:35.903564 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 14 23:39:35.903571 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 14 23:39:35.903578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 23:39:35.903584 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 14 23:39:35.903591 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 14 23:39:35.903597 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 14 23:39:35.903604 kernel: arm-pv: using stolen time PV Jul 14 23:39:35.903611 kernel: Console: colour dummy device 80x25 Jul 14 23:39:35.903618 kernel: ACPI: Core revision 20230628 Jul 14 23:39:35.903625 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 14 23:39:35.903633 kernel: pid_max: default: 32768 minimum: 301 Jul 14 23:39:35.903640 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 14 23:39:35.903647 kernel: landlock: Up and running. Jul 14 23:39:35.903653 kernel: SELinux: Initializing. Jul 14 23:39:35.903660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 23:39:35.903667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 23:39:35.903674 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 23:39:35.903681 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 23:39:35.903687 kernel: rcu: Hierarchical SRCU implementation. Jul 14 23:39:35.903695 kernel: rcu: Max phase no-delay instances is 400. Jul 14 23:39:35.903702 kernel: Platform MSI: ITS@0x8080000 domain created Jul 14 23:39:35.903708 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 14 23:39:35.903715 kernel: Remapping and enabling EFI services. Jul 14 23:39:35.903722 kernel: smp: Bringing up secondary CPUs ... 
Jul 14 23:39:35.903729 kernel: Detected PIPT I-cache on CPU1 Jul 14 23:39:35.903735 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 14 23:39:35.903742 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 14 23:39:35.903749 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 23:39:35.903757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 14 23:39:35.903764 kernel: Detected PIPT I-cache on CPU2 Jul 14 23:39:35.903775 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 14 23:39:35.903783 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 14 23:39:35.903790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 23:39:35.903797 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 14 23:39:35.903804 kernel: Detected PIPT I-cache on CPU3 Jul 14 23:39:35.903811 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 14 23:39:35.903818 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 14 23:39:35.903827 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 23:39:35.903834 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 14 23:39:35.903841 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 23:39:35.903848 kernel: SMP: Total of 4 processors activated. Jul 14 23:39:35.903901 kernel: CPU features: detected: 32-bit EL0 Support Jul 14 23:39:35.903909 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 14 23:39:35.903916 kernel: CPU features: detected: Common not Private translations Jul 14 23:39:35.903923 kernel: CPU features: detected: CRC32 instructions Jul 14 23:39:35.903930 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 14 23:39:35.903940 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 14 23:39:35.903947 kernel: CPU features: detected: LSE atomic instructions Jul 14 23:39:35.903954 kernel: CPU features: detected: Privileged Access Never Jul 14 23:39:35.903961 kernel: CPU features: detected: RAS Extension Support Jul 14 23:39:35.903968 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 14 23:39:35.903975 kernel: CPU: All CPU(s) started at EL1 Jul 14 23:39:35.903982 kernel: alternatives: applying system-wide alternatives Jul 14 23:39:35.903989 kernel: devtmpfs: initialized Jul 14 23:39:35.903996 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 23:39:35.904004 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 23:39:35.904011 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 23:39:35.904018 kernel: SMBIOS 3.0.0 present. 
Jul 14 23:39:35.904025 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 14 23:39:35.904032 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 23:39:35.904040 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 14 23:39:35.904047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 14 23:39:35.904054 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 14 23:39:35.904062 kernel: audit: initializing netlink subsys (disabled) Jul 14 23:39:35.904070 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Jul 14 23:39:35.904077 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 23:39:35.904084 kernel: cpuidle: using governor menu Jul 14 23:39:35.904091 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 14 23:39:35.904098 kernel: ASID allocator initialised with 32768 entries Jul 14 23:39:35.904105 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 23:39:35.904112 kernel: Serial: AMBA PL011 UART driver Jul 14 23:39:35.904119 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 14 23:39:35.904126 kernel: Modules: 0 pages in range for non-PLT usage Jul 14 23:39:35.904134 kernel: Modules: 509264 pages in range for PLT usage Jul 14 23:39:35.904141 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 23:39:35.904148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 23:39:35.904155 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 14 23:39:35.904162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 14 23:39:35.904169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 23:39:35.904176 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 23:39:35.904183 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 14 23:39:35.904191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 14 23:39:35.904199 kernel: ACPI: Added _OSI(Module Device) Jul 14 23:39:35.904206 kernel: ACPI: Added _OSI(Processor Device) Jul 14 23:39:35.904213 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 23:39:35.904220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 23:39:35.904227 kernel: ACPI: Interpreter enabled Jul 14 23:39:35.904234 kernel: ACPI: Using GIC for interrupt routing Jul 14 23:39:35.904241 kernel: ACPI: MCFG table detected, 1 entries Jul 14 23:39:35.904249 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 14 23:39:35.904255 kernel: printk: console [ttyAMA0] enabled Jul 14 23:39:35.904264 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 23:39:35.904409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 23:39:35.904484 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 14 23:39:35.904549 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 14 23:39:35.904613 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 14 23:39:35.904676 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 14 23:39:35.904685 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 14 23:39:35.904695 kernel: PCI host bridge to bus 0000:00 Jul 14 23:39:35.904764 kernel: 
pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 14 23:39:35.904824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 14 23:39:35.904899 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 14 23:39:35.904959 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 23:39:35.905040 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 14 23:39:35.905123 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 23:39:35.905197 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 14 23:39:35.905273 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 14 23:39:35.905356 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 23:39:35.905455 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 23:39:35.905527 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 14 23:39:35.905597 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 14 23:39:35.905659 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 14 23:39:35.905723 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 14 23:39:35.905784 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 14 23:39:35.905793 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 14 23:39:35.905801 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 14 23:39:35.905808 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 14 23:39:35.905815 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 14 23:39:35.905822 kernel: iommu: Default domain type: Translated Jul 14 23:39:35.905830 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 14 23:39:35.905839 kernel: efivars: Registered efivars operations Jul 14 23:39:35.905846 kernel: vgaarb: loaded Jul 14 23:39:35.905862 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 14 23:39:35.905869 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 23:39:35.905877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 23:39:35.905884 kernel: pnp: PnP ACPI init Jul 14 23:39:35.905966 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 14 23:39:35.905977 kernel: pnp: PnP ACPI: found 1 devices Jul 14 23:39:35.905986 kernel: NET: Registered PF_INET protocol family Jul 14 23:39:35.905993 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 23:39:35.906001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 23:39:35.906008 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 23:39:35.906016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 23:39:35.906023 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 14 23:39:35.906030 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 23:39:35.906037 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 23:39:35.906045 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 23:39:35.906053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 23:39:35.906061 kernel: PCI: CLS 0 bytes, default 64 Jul 14 23:39:35.906068 kernel: kvm [1]: HYP mode not available Jul 14 23:39:35.906075 kernel: Initialise system trusted keyrings Jul 
14 23:39:35.906082 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 23:39:35.906089 kernel: Key type asymmetric registered Jul 14 23:39:35.906097 kernel: Asymmetric key parser 'x509' registered Jul 14 23:39:35.906104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 14 23:39:35.906111 kernel: io scheduler mq-deadline registered Jul 14 23:39:35.906120 kernel: io scheduler kyber registered Jul 14 23:39:35.906127 kernel: io scheduler bfq registered Jul 14 23:39:35.906134 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 14 23:39:35.906141 kernel: ACPI: button: Power Button [PWRB] Jul 14 23:39:35.906149 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 14 23:39:35.906220 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 14 23:39:35.906230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 23:39:35.906237 kernel: thunder_xcv, ver 1.0 Jul 14 23:39:35.906244 kernel: thunder_bgx, ver 1.0 Jul 14 23:39:35.906253 kernel: nicpf, ver 1.0 Jul 14 23:39:35.906261 kernel: nicvf, ver 1.0 Jul 14 23:39:35.906347 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 14 23:39:35.906415 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T23:39:35 UTC (1752536375) Jul 14 23:39:35.906424 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 14 23:39:35.906432 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 14 23:39:35.906439 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 14 23:39:35.906446 kernel: watchdog: Hard watchdog permanently disabled Jul 14 23:39:35.906456 kernel: NET: Registered PF_INET6 protocol family Jul 14 23:39:35.906464 kernel: Segment Routing with IPv6 Jul 14 23:39:35.906471 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 23:39:35.906478 kernel: NET: Registered PF_PACKET protocol family Jul 14 23:39:35.906485 kernel: Key type dns_resolver registered Jul 14 23:39:35.906492 kernel: registered taskstats version 1 Jul 14 23:39:35.906500 kernel: Loading compiled-in X.509 certificates Jul 14 23:39:35.906507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 8eaa625e87337058757867ffb1653686b1cfa386' Jul 14 23:39:35.906514 kernel: Key type .fscrypt registered Jul 14 23:39:35.906523 kernel: Key type fscrypt-provisioning registered Jul 14 23:39:35.906530 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 23:39:35.906538 kernel: ima: Allocated hash algorithm: sha1 Jul 14 23:39:35.906545 kernel: ima: No architecture policies found Jul 14 23:39:35.906552 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 14 23:39:35.906560 kernel: clk: Disabling unused clocks Jul 14 23:39:35.906567 kernel: Freeing unused kernel memory: 38336K Jul 14 23:39:35.906574 kernel: Run /init as init process Jul 14 23:39:35.906581 kernel: with arguments: Jul 14 23:39:35.906589 kernel: /init Jul 14 23:39:35.906596 kernel: with environment: Jul 14 23:39:35.906603 kernel: HOME=/ Jul 14 23:39:35.906611 kernel: TERM=linux Jul 14 23:39:35.906618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 23:39:35.906626 systemd[1]: Successfully made /usr/ read-only. 
Jul 14 23:39:35.906636 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 23:39:35.906646 systemd[1]: Detected virtualization kvm. Jul 14 23:39:35.906654 systemd[1]: Detected architecture arm64. Jul 14 23:39:35.906661 systemd[1]: Running in initrd. Jul 14 23:39:35.906669 systemd[1]: No hostname configured, using default hostname. Jul 14 23:39:35.906677 systemd[1]: Hostname set to . Jul 14 23:39:35.906684 systemd[1]: Initializing machine ID from VM UUID. Jul 14 23:39:35.906692 systemd[1]: Queued start job for default target initrd.target. Jul 14 23:39:35.906700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 23:39:35.906708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 23:39:35.906718 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 23:39:35.906726 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 23:39:35.906734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 23:39:35.906742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 23:39:35.906751 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 23:39:35.906759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 23:39:35.906769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:39:35.906782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:39:35.906790 systemd[1]: Reached target paths.target - Path Units. Jul 14 23:39:35.906800 systemd[1]: Reached target slices.target - Slice Units. Jul 14 23:39:35.906808 systemd[1]: Reached target swap.target - Swaps. Jul 14 23:39:35.906818 systemd[1]: Reached target timers.target - Timer Units. Jul 14 23:39:35.906828 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:39:35.906837 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:39:35.906845 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 23:39:35.906864 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 14 23:39:35.906884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:39:35.906893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 23:39:35.906901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 23:39:35.906909 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:39:35.906917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 23:39:35.906924 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 23:39:35.906932 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 23:39:35.906940 systemd[1]: Starting systemd-fsck-usr.service... 
Jul 14 23:39:35.906950 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 23:39:35.906959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 23:39:35.906966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:39:35.906974 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 23:39:35.906982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:39:35.906992 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 23:39:35.907000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 23:39:35.907008 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 23:39:35.907016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:39:35.907042 systemd-journald[238]: Collecting audit messages is disabled. Jul 14 23:39:35.907062 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 23:39:35.907070 kernel: Bridge firewalling registered Jul 14 23:39:35.907078 systemd-journald[238]: Journal started Jul 14 23:39:35.907097 systemd-journald[238]: Runtime Journal (/run/log/journal/68ced0be7f7f420c814d99c4b95c8623) is 5.9M, max 47.3M, 41.4M free. Jul 14 23:39:35.891354 systemd-modules-load[240]: Inserted module 'overlay' Jul 14 23:39:35.917050 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:39:35.907493 systemd-modules-load[240]: Inserted module 'br_netfilter' Jul 14 23:39:35.920895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 23:39:35.921870 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 23:39:35.923329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 23:39:35.927847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:39:35.930409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 23:39:35.933309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:39:35.937324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:39:35.938744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:39:35.940821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:39:35.956982 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 23:39:35.959212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 23:39:35.967379 dracut-cmdline[279]: dracut-dracut-053 Jul 14 23:39:35.969863 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=512217804ec478510322362c135dfb84c13b721a5aed5e04313ad4e2676ce8f7 Jul 14 23:39:35.992050 systemd-resolved[281]: Positive Trust Anchors: Jul 14 23:39:35.992068 systemd-resolved[281]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:39:35.992104 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:39:35.996735 systemd-resolved[281]: Defaulting to hostname 'linux'. Jul 14 23:39:35.997706 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 23:39:36.001447 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:39:36.039882 kernel: SCSI subsystem initialized Jul 14 23:39:36.044871 kernel: Loading iSCSI transport class v2.0-870. Jul 14 23:39:36.053877 kernel: iscsi: registered transport (tcp) Jul 14 23:39:36.066889 kernel: iscsi: registered transport (qla4xxx) Jul 14 23:39:36.066910 kernel: QLogic iSCSI HBA Driver Jul 14 23:39:36.108899 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 23:39:36.120015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 23:39:36.136014 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 23:39:36.136061 kernel: device-mapper: uevent: version 1.0.3 Jul 14 23:39:36.139959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 23:39:36.183887 kernel: raid6: neonx8 gen() 15766 MB/s Jul 14 23:39:36.200873 kernel: raid6: neonx4 gen() 15779 MB/s Jul 14 23:39:36.217891 kernel: raid6: neonx2 gen() 13116 MB/s Jul 14 23:39:36.234874 kernel: raid6: neonx1 gen() 10453 MB/s Jul 14 23:39:36.251867 kernel: raid6: int64x8 gen() 6766 MB/s Jul 14 23:39:36.268872 kernel: raid6: int64x4 gen() 7321 MB/s Jul 14 23:39:36.285878 kernel: raid6: int64x2 gen() 6105 MB/s Jul 14 23:39:36.303008 kernel: raid6: int64x1 gen() 5049 MB/s Jul 14 23:39:36.303034 kernel: raid6: using algorithm neonx4 gen() 15779 MB/s Jul 14 23:39:36.320928 kernel: raid6: .... xor() 12440 MB/s, rmw enabled Jul 14 23:39:36.320941 kernel: raid6: using neon recovery algorithm Jul 14 23:39:36.326334 kernel: xor: measuring software checksum speed Jul 14 23:39:36.326349 kernel: 8regs : 21550 MB/sec Jul 14 23:39:36.327019 kernel: 32regs : 21624 MB/sec Jul 14 23:39:36.328232 kernel: arm64_neon : 27766 MB/sec Jul 14 23:39:36.328253 kernel: xor: using function: arm64_neon (27766 MB/sec) Jul 14 23:39:36.378873 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 23:39:36.389164 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:39:36.407078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:39:36.420452 systemd-udevd[465]: Using default interface naming scheme 'v255'. Jul 14 23:39:36.424095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:39:36.436173 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 14 23:39:36.447017 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Jul 14 23:39:36.471782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:39:36.483012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 23:39:36.522242 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:39:36.531149 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 23:39:36.547062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 23:39:36.548576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 23:39:36.550413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:39:36.552749 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 23:39:36.562058 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 23:39:36.572392 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:39:36.583765 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 14 23:39:36.583934 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 23:39:36.589010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:39:36.589127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:39:36.597177 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 23:39:36.597199 kernel: GPT:9289727 != 19775487 Jul 14 23:39:36.597209 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 23:39:36.597218 kernel: GPT:9289727 != 19775487 Jul 14 23:39:36.597227 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 23:39:36.597237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:39:36.597216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:39:36.598553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 23:39:36.598685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:39:36.602581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:39:36.610869 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (516) Jul 14 23:39:36.611156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:39:36.616112 kernel: BTRFS: device fsid 8895327b-1920-447b-b5a7-0382bfb5e640 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (509) Jul 14 23:39:36.623364 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 14 23:39:36.624741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:39:36.634593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 14 23:39:36.654951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 23:39:36.661106 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 14 23:39:36.662317 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 14 23:39:36.677022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 14 23:39:36.681017 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:39:36.683735 disk-uuid[555]: Primary Header is updated. Jul 14 23:39:36.683735 disk-uuid[555]: Secondary Entries is updated. Jul 14 23:39:36.683735 disk-uuid[555]: Secondary Header is updated. Jul 14 23:39:36.689880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:39:36.701111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:39:37.698881 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:39:37.699306 disk-uuid[556]: The operation has completed successfully. Jul 14 23:39:37.719912 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 23:39:37.720000 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 23:39:37.760988 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 23:39:37.763572 sh[575]: Success Jul 14 23:39:37.778063 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 23:39:37.801103 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 23:39:37.816079 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 23:39:37.817462 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 23:39:37.826637 kernel: BTRFS info (device dm-0): first mount of filesystem 8895327b-1920-447b-b5a7-0382bfb5e640 Jul 14 23:39:37.826687 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 14 23:39:37.826708 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 23:39:37.828445 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 23:39:37.828462 kernel: BTRFS info (device dm-0): using free space tree Jul 14 23:39:37.832416 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 23:39:37.833676 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 14 23:39:37.844028 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 23:39:37.845581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 23:39:37.858895 kernel: BTRFS info (device vda6): first mount of filesystem eef90d40-9937-4493-9431-dd224d491776 Jul 14 23:39:37.858936 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 23:39:37.858947 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:39:37.862868 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:39:37.866877 kernel: BTRFS info (device vda6): last unmount of filesystem eef90d40-9937-4493-9431-dd224d491776 Jul 14 23:39:37.869725 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 23:39:37.882035 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 23:39:37.931330 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:39:37.950008 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 14 23:39:37.967020 ignition[663]: Ignition 2.20.0 Jul 14 23:39:37.967030 ignition[663]: Stage: fetch-offline Jul 14 23:39:37.967061 ignition[663]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:37.967069 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:37.967224 ignition[663]: parsed url from cmdline: "" Jul 14 23:39:37.967227 ignition[663]: no config URL provided Jul 14 23:39:37.967231 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 23:39:37.967238 ignition[663]: no config at "/usr/lib/ignition/user.ign" Jul 14 23:39:37.967261 ignition[663]: op(1): [started] loading QEMU firmware config module Jul 14 23:39:37.967266 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 23:39:37.975175 systemd-networkd[766]: lo: Link UP Jul 14 23:39:37.975187 systemd-networkd[766]: lo: Gained carrier Jul 14 23:39:37.976034 systemd-networkd[766]: Enumeration completed Jul 14 23:39:37.976520 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:39:37.976523 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 23:39:37.979099 ignition[663]: op(1): [finished] loading QEMU firmware config module Jul 14 23:39:37.977555 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:39:37.977905 systemd-networkd[766]: eth0: Link UP Jul 14 23:39:37.977908 systemd-networkd[766]: eth0: Gained carrier Jul 14 23:39:37.977915 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:39:37.978802 systemd[1]: Reached target network.target - Network. Jul 14 23:39:37.998895 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 23:39:38.022848 ignition[663]: parsing config with SHA512: 8677d7bda354a5b835f9ac634e87f4b6f817a2c73d061f54ab25b19f832feb14078f25d093241719adfcc771c4193992b92dd8674f238e42635834c1de8266ce Jul 14 23:39:38.027645 unknown[663]: fetched base config from "system" Jul 14 23:39:38.027653 unknown[663]: fetched user config from "qemu" Jul 14 23:39:38.028047 ignition[663]: fetch-offline: fetch-offline passed Jul 14 23:39:38.029802 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:39:38.028115 ignition[663]: Ignition finished successfully Jul 14 23:39:38.032132 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 23:39:38.041979 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 23:39:38.054674 ignition[774]: Ignition 2.20.0 Jul 14 23:39:38.054684 ignition[774]: Stage: kargs Jul 14 23:39:38.054845 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:38.054875 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:38.058291 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 23:39:38.055744 ignition[774]: kargs: kargs passed Jul 14 23:39:38.055785 ignition[774]: Ignition finished successfully Jul 14 23:39:38.073046 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 14 23:39:38.081879 ignition[783]: Ignition 2.20.0 Jul 14 23:39:38.081889 ignition[783]: Stage: disks Jul 14 23:39:38.082056 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:38.082066 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:38.084374 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 23:39:38.082903 ignition[783]: disks: disks passed Jul 14 23:39:38.086085 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 23:39:38.082947 ignition[783]: Ignition finished successfully Jul 14 23:39:38.087724 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 23:39:38.088888 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 23:39:38.090328 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:39:38.092158 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:39:38.106004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 23:39:38.114160 systemd-resolved[281]: Detected conflict on linux IN A 10.0.0.8 Jul 14 23:39:38.114173 systemd-resolved[281]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Jul 14 23:39:38.116984 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 14 23:39:38.120405 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 23:39:38.122495 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 23:39:38.167800 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 23:39:38.169379 kernel: EXT4-fs (vda9): mounted filesystem fa8fac2d-2db4-48ec-a1b8-c715c551ae15 r/w with ordered data mode. Quota mode: none. Jul 14 23:39:38.169161 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 23:39:38.189943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:39:38.191666 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 23:39:38.193059 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 23:39:38.193098 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 23:39:38.199823 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Jul 14 23:39:38.199860 kernel: BTRFS info (device vda6): first mount of filesystem eef90d40-9937-4493-9431-dd224d491776 Jul 14 23:39:38.193121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:39:38.203453 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 23:39:38.203473 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:39:38.200214 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 23:39:38.205922 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:39:38.205606 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 23:39:38.207791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 23:39:38.246859 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 23:39:38.251022 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Jul 14 23:39:38.254649 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 23:39:38.257446 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 23:39:38.325243 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 23:39:38.339945 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 23:39:38.342290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 23:39:38.346865 kernel: BTRFS info (device vda6): last unmount of filesystem eef90d40-9937-4493-9431-dd224d491776 Jul 14 23:39:38.359443 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 23:39:38.364504 ignition[914]: INFO : Ignition 2.20.0 Jul 14 23:39:38.364504 ignition[914]: INFO : Stage: mount Jul 14 23:39:38.366819 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:38.366819 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:38.366819 ignition[914]: INFO : mount: mount passed Jul 14 23:39:38.366819 ignition[914]: INFO : Ignition finished successfully Jul 14 23:39:38.367176 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 23:39:38.377980 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 23:39:38.955304 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 23:39:38.964060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:39:38.970548 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Jul 14 23:39:38.970576 kernel: BTRFS info (device vda6): first mount of filesystem eef90d40-9937-4493-9431-dd224d491776 Jul 14 23:39:38.970594 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 23:39:38.972119 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:39:38.973872 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:39:38.975241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 23:39:38.990525 ignition[944]: INFO : Ignition 2.20.0 Jul 14 23:39:38.990525 ignition[944]: INFO : Stage: files Jul 14 23:39:38.992105 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:38.992105 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:38.992105 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Jul 14 23:39:38.995444 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 23:39:38.995444 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 23:39:38.995444 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 23:39:38.995444 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 23:39:38.995444 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 23:39:38.995444 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 14 23:39:38.994196 unknown[944]: wrote ssh authorized keys file for user: core Jul 14 23:39:39.004427 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 14 23:39:39.045495 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 23:39:39.239353 systemd-networkd[766]: eth0: Gained IPv6LL Jul 14 23:39:39.431083 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 14 23:39:39.433047 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 23:39:39.433047 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 14 23:39:39.792194 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 23:39:39.979505 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 23:39:39.981374 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 14 23:39:40.500016 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 23:39:41.427629 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 23:39:41.427629 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 14 23:39:41.431419 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 23:39:41.456724 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:39:41.460351 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:39:41.461914 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 23:39:41.461914 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 14 23:39:41.461914 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 23:39:41.461914 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:39:41.461914 ignition[944]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:39:41.461914 ignition[944]: INFO : files: files passed Jul 14 23:39:41.461914 ignition[944]: INFO : Ignition finished successfully Jul 14 23:39:41.463084 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 23:39:41.472100 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 23:39:41.474554 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 23:39:41.476143 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 23:39:41.476215 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 23:39:41.483995 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 23:39:41.486394 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:39:41.486394 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:39:41.490354 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:39:41.491421 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:39:41.493345 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 23:39:41.508074 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 23:39:41.525317 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 23:39:41.525432 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 23:39:41.527592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 23:39:41.529388 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 23:39:41.531135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 23:39:41.531881 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 23:39:41.546917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:39:41.560005 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 23:39:41.567382 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:39:41.568667 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:39:41.570678 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 23:39:41.572472 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 23:39:41.572597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:39:41.575026 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 23:39:41.576954 systemd[1]: Stopped target basic.target - Basic System. Jul 14 23:39:41.578617 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 23:39:41.580304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:39:41.582187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 23:39:41.584100 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jul 14 23:39:41.585895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 23:39:41.587876 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 23:39:41.589895 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 23:39:41.591658 systemd[1]: Stopped target swap.target - Swaps. Jul 14 23:39:41.593164 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 23:39:41.593284 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:39:41.595738 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:39:41.596883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:39:41.598806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 23:39:41.602659 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 23:39:41.603968 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 23:39:41.604099 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 23:39:41.606985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 23:39:41.607105 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:39:41.609020 systemd[1]: Stopped target paths.target - Path Units. Jul 14 23:39:41.610588 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 23:39:41.614285 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 23:39:41.615541 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 23:39:41.617635 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 23:39:41.619165 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 23:39:41.619240 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:39:41.620757 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 23:39:41.620837 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:39:41.622355 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 23:39:41.622462 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:39:41.624184 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 23:39:41.624282 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 23:39:41.636002 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 23:39:41.636900 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 23:39:41.637027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:39:41.643683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 23:39:41.644569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 23:39:41.644704 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 14 23:39:41.650799 ignition[999]: INFO : Ignition 2.20.0 Jul 14 23:39:41.650799 ignition[999]: INFO : Stage: umount Jul 14 23:39:41.650799 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:39:41.650799 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:39:41.650799 ignition[999]: INFO : umount: umount passed Jul 14 23:39:41.650799 ignition[999]: INFO : Ignition finished successfully Jul 14 23:39:41.647109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 23:39:41.647211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:39:41.650600 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 23:39:41.650701 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 23:39:41.652918 systemd[1]: Stopped target network.target - Network. Jul 14 23:39:41.653902 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 23:39:41.653966 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 23:39:41.655750 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 23:39:41.655793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 23:39:41.658017 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 23:39:41.658059 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 23:39:41.659606 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 23:39:41.659646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 23:39:41.661679 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 23:39:41.663662 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 23:39:41.666494 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 23:39:41.667300 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 23:39:41.667395 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 23:39:41.669059 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 23:39:41.669143 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 23:39:41.673827 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 14 23:39:41.674168 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 23:39:41.674356 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 23:39:41.677647 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 14 23:39:41.679727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 23:39:41.679777 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:39:41.689969 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 23:39:41.691098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 23:39:41.691182 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:39:41.693151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 23:39:41.693202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:39:41.698180 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 23:39:41.698227 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 14 23:39:41.700196 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 23:39:41.700242 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:39:41.703445 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:39:41.705768 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 23:39:41.705824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 14 23:39:41.713793 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 23:39:41.713934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 23:39:41.716407 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 23:39:41.716534 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:39:41.719490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 23:39:41.719554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 23:39:41.720914 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 23:39:41.720944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 23:39:41.722764 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 23:39:41.722818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:39:41.725880 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 23:39:41.725929 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 23:39:41.728907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:39:41.728956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:39:41.741008 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 23:39:41.742040 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 23:39:41.742094 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:39:41.745172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 23:39:41.745213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:39:41.748916 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 23:39:41.748967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 14 23:39:41.749252 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 23:39:41.749345 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 23:39:41.750456 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 23:39:41.750523 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 23:39:41.753231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 23:39:41.754734 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 23:39:41.754788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 23:39:41.757282 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 23:39:41.766081 systemd[1]: Switching root. Jul 14 23:39:41.791994 systemd-journald[238]: Journal stopped Jul 14 23:39:42.571720 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
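At this point PID 1 switches root and the initrd journal is stopped and handed over to the journald instance of the real system. A transcript like the one above can later be pulled back out of the journal; a small sketch, assuming journalctl is on PATH and the journal is kept persistently:

    import subprocess

    def boot_transcript() -> str:
        """Dump the current boot's journal with microsecond timestamps,
        close in shape to the transcript above."""
        result = subprocess.run(
            ["journalctl", "-b", "-o", "short-precise", "--no-pager"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(boot_transcript()[:2000])  # first couple of KB as a sanity check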
Jul 14 23:39:42.571775 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 23:39:42.571787 kernel: SELinux: policy capability open_perms=1 Jul 14 23:39:42.571800 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 23:39:42.571810 kernel: SELinux: policy capability always_check_network=0 Jul 14 23:39:42.571819 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 23:39:42.571829 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 23:39:42.571839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 23:39:42.571848 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 23:39:42.571911 systemd[1]: Successfully loaded SELinux policy in 34.069ms. Jul 14 23:39:42.571929 kernel: audit: type=1403 audit(1752536381.957:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 23:39:42.571941 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.439ms. Jul 14 23:39:42.571952 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 14 23:39:42.571967 systemd[1]: Detected virtualization kvm. Jul 14 23:39:42.571977 systemd[1]: Detected architecture arm64. Jul 14 23:39:42.571988 systemd[1]: Detected first boot. Jul 14 23:39:42.571998 systemd[1]: Initializing machine ID from VM UUID. Jul 14 23:39:42.572008 zram_generator::config[1045]: No configuration found. Jul 14 23:39:42.572019 kernel: NET: Registered PF_VSOCK protocol family Jul 14 23:39:42.572029 systemd[1]: Populated /etc with preset unit settings. Jul 14 23:39:42.572041 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 14 23:39:42.572052 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 23:39:42.572067 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 23:39:42.572080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 23:39:42.572091 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 23:39:42.572104 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 23:39:42.572114 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 23:39:42.572124 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 23:39:42.572136 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 23:39:42.572147 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 23:39:42.572157 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 23:39:42.572167 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 23:39:42.572178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 23:39:42.572188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 23:39:42.572199 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 23:39:42.572209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
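The entries above show systemd loading the SELinux policy, detecting the first boot and initializing the machine ID from the VM UUID. A short sketch that reads the resulting machine ID and the SELinux enforcement flag; both paths are standard kernel/systemd interfaces rather than anything specific to this log:

    from pathlib import Path

    def machine_id() -> str:
        # 128-bit machine ID written by systemd; on this boot it was
        # initialized from the VM UUID per the log above.
        return Path("/etc/machine-id").read_text().strip()

    def selinux_mode() -> str:
        # /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive)
        # once a policy is loaded, as reported above.
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return "selinux not active"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print(machine_id(), selinux_mode())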
Jul 14 23:39:42.572219 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 23:39:42.572231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 23:39:42.572243 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 14 23:39:42.572253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:39:42.572264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 23:39:42.572274 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 23:39:42.572284 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 23:39:42.572300 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 23:39:42.572319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:39:42.572331 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 23:39:42.572341 systemd[1]: Reached target slices.target - Slice Units. Jul 14 23:39:42.572351 systemd[1]: Reached target swap.target - Swaps. Jul 14 23:39:42.572361 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 23:39:42.572371 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 23:39:42.572382 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 14 23:39:42.572392 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:39:42.572403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 23:39:42.572413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 23:39:42.572425 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 23:39:42.572435 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 23:39:42.572445 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 23:39:42.572455 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 23:39:42.572465 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 23:39:42.572476 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 23:39:42.572486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 23:39:42.572497 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 23:39:42.572509 systemd[1]: Reached target machines.target - Containers. Jul 14 23:39:42.572520 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 23:39:42.572530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 23:39:42.572541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 23:39:42.572552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 23:39:42.572562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 23:39:42.572572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 23:39:42.572582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 14 23:39:42.572591 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 23:39:42.572603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 23:39:42.572614 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 23:39:42.572624 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 23:39:42.572634 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 23:39:42.572643 kernel: fuse: init (API version 7.39) Jul 14 23:39:42.572654 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 23:39:42.572664 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 23:39:42.572675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 23:39:42.572686 kernel: loop: module loaded Jul 14 23:39:42.572696 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 23:39:42.572707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 23:39:42.572717 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 23:39:42.572726 kernel: ACPI: bus type drm_connector registered Jul 14 23:39:42.572736 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 23:39:42.572746 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 14 23:39:42.572756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 23:39:42.572768 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 23:39:42.572778 systemd[1]: Stopped verity-setup.service. Jul 14 23:39:42.572788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 23:39:42.572798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 23:39:42.572808 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 23:39:42.572818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 23:39:42.572850 systemd-journald[1113]: Collecting audit messages is disabled. Jul 14 23:39:42.572880 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 23:39:42.572891 systemd-journald[1113]: Journal started Jul 14 23:39:42.572911 systemd-journald[1113]: Runtime Journal (/run/log/journal/68ced0be7f7f420c814d99c4b95c8623) is 5.9M, max 47.3M, 41.4M free. Jul 14 23:39:42.360753 systemd[1]: Queued start job for default target multi-user.target. Jul 14 23:39:42.371799 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 23:39:42.372173 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 23:39:42.575612 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 23:39:42.576247 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 23:39:42.578885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 23:39:42.580299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:39:42.581756 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 23:39:42.581956 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jul 14 23:39:42.583425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:39:42.583582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 23:39:42.585091 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 23:39:42.585256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 23:39:42.586540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:39:42.586694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 23:39:42.588229 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 23:39:42.588395 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 23:39:42.591185 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:39:42.591359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 23:39:42.592768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 23:39:42.595228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 23:39:42.596847 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 23:39:42.598337 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 14 23:39:42.610646 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 23:39:42.618944 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 23:39:42.620940 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 23:39:42.622036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 23:39:42.622077 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 23:39:42.624010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 14 23:39:42.626134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 23:39:42.628147 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 23:39:42.629211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 23:39:42.630748 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 23:39:42.632928 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 23:39:42.634217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:39:42.635365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 23:39:42.636542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 23:39:42.640904 systemd-journald[1113]: Time spent on flushing to /var/log/journal/68ced0be7f7f420c814d99c4b95c8623 is 27.625ms for 870 entries. Jul 14 23:39:42.640904 systemd-journald[1113]: System Journal (/var/log/journal/68ced0be7f7f420c814d99c4b95c8623) is 8M, max 195.6M, 187.6M free. Jul 14 23:39:42.677416 systemd-journald[1113]: Received client request to flush runtime journal. 
Jul 14 23:39:42.677455 kernel: loop0: detected capacity change from 0 to 207008 Jul 14 23:39:42.642032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:39:42.647297 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 23:39:42.650942 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 23:39:42.655050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:39:42.657719 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 23:39:42.661037 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 23:39:42.665015 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 23:39:42.666577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 23:39:42.676959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 23:39:42.684275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 23:39:42.690030 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 14 23:39:42.695074 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 23:39:42.697590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 23:39:42.700149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:39:42.706663 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 23:39:42.707911 kernel: loop1: detected capacity change from 0 to 113512 Jul 14 23:39:42.713888 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 14 23:39:42.723113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 23:39:42.724614 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 23:39:42.744983 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jul 14 23:39:42.745001 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jul 14 23:39:42.749919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:39:42.754877 kernel: loop2: detected capacity change from 0 to 123192 Jul 14 23:39:42.791887 kernel: loop3: detected capacity change from 0 to 207008 Jul 14 23:39:42.798884 kernel: loop4: detected capacity change from 0 to 113512 Jul 14 23:39:42.803885 kernel: loop5: detected capacity change from 0 to 123192 Jul 14 23:39:42.807443 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 23:39:42.807827 (sd-merge)[1186]: Merged extensions into '/usr'. Jul 14 23:39:42.811445 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 23:39:42.811463 systemd[1]: Reloading... Jul 14 23:39:42.873879 zram_generator::config[1217]: No configuration found. Jul 14 23:39:42.932894 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 23:39:42.971480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
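Above, journald flushes the runtime journal at /run/log/journal/68ced0be7f7f420c814d99c4b95c8623 into the persistent journal under /var/log/journal and prints the sizes of both. A minimal sketch that recomputes the on-disk usage of the two trees with a plain directory walk (paths taken from the log; reading them typically requires membership in systemd-journal or root):

    import os
    from pathlib import Path

    def du(path: Path) -> int:
        """Total size in bytes of all files under path (0 if it does not exist)."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += (Path(root) / name).stat().st_size
                except OSError:
                    pass
        return total

    if __name__ == "__main__":
        for p in (Path("/run/log/journal"), Path("/var/log/journal")):
            print(f"{p}: {du(p) / 1024 / 1024:.1f} MiB")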
Jul 14 23:39:43.022030 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 23:39:43.022218 systemd[1]: Reloading finished in 210 ms. Jul 14 23:39:43.039521 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 23:39:43.041230 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 23:39:43.051077 systemd[1]: Starting ensure-sysext.service... Jul 14 23:39:43.052793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 23:39:43.065960 systemd[1]: Reload requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jul 14 23:39:43.065974 systemd[1]: Reloading... Jul 14 23:39:43.070695 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 23:39:43.071229 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 23:39:43.071981 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 23:39:43.072315 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jul 14 23:39:43.072439 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jul 14 23:39:43.075162 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 23:39:43.075266 systemd-tmpfiles[1249]: Skipping /boot Jul 14 23:39:43.084344 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 23:39:43.084444 systemd-tmpfiles[1249]: Skipping /boot Jul 14 23:39:43.107888 zram_generator::config[1278]: No configuration found. Jul 14 23:39:43.195720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:39:43.245650 systemd[1]: Reloading finished in 179 ms. Jul 14 23:39:43.263644 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 23:39:43.280011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:39:43.287728 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 23:39:43.290249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 23:39:43.292604 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 23:39:43.298232 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 23:39:43.302471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:39:43.307148 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 23:39:43.310982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 23:39:43.313915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 23:39:43.318342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 23:39:43.320693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 23:39:43.321809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
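The sd-merge lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions into /usr; the kubernetes image is reached through the /etc/extensions/kubernetes.raw symlink that Ignition created earlier in this log. A sketch that lists candidate extension images; the set of search directories below is an assumption based on common sysext defaults, and only /etc/extensions is confirmed by this log:

    from pathlib import Path

    # Directories commonly scanned by systemd-sysext for *.raw images
    # (assumed defaults; /etc/extensions is the one populated above).
    SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

    def list_extensions() -> list[Path]:
        found = []
        for d in SEARCH_DIRS:
            if d.is_dir():
                found.extend(sorted(d.glob("*.raw")))
        return found

    if __name__ == "__main__":
        for img in list_extensions():
            target = img.resolve() if img.is_symlink() else img
            print(f"{img} -> {target}")

On a running machine, `systemd-sysext status` reports the same merge state that the sd-merge messages above describe.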
Jul 14 23:39:43.322037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 23:39:43.327107 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 23:39:43.329400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:39:43.329640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 23:39:43.331348 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:39:43.331740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 23:39:43.334279 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 23:39:43.351173 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Jul 14 23:39:43.351629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 23:39:43.353564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 23:39:43.355714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 23:39:43.358084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 23:39:43.358196 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 23:39:43.361133 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 23:39:43.363111 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 23:39:43.365137 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 23:39:43.366849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:39:43.367079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 23:39:43.368776 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:39:43.368948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 23:39:43.370578 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 23:39:43.375089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:39:43.375319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 23:39:43.377333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:39:43.386499 augenrules[1366]: No rules Jul 14 23:39:43.387185 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 23:39:43.387404 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 23:39:43.393330 systemd[1]: Finished ensure-sysext.service. Jul 14 23:39:43.397462 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 23:39:43.399514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 23:39:43.409221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 23:39:43.418050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 14 23:39:43.421083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 23:39:43.426013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 23:39:43.426878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1358) Jul 14 23:39:43.427337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 23:39:43.427402 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 14 23:39:43.430576 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 23:39:43.438156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 23:39:43.439296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 23:39:43.441439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:39:43.441622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 23:39:43.443942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:39:43.444102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 23:39:43.460317 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 23:39:43.460511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 23:39:43.461942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:39:43.462096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 23:39:43.468676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:39:43.468733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 23:39:43.472025 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 14 23:39:43.504775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 23:39:43.521042 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 23:39:43.524280 systemd-resolved[1318]: Positive Trust Anchors: Jul 14 23:39:43.524305 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:39:43.524335 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:39:43.531660 systemd-resolved[1318]: Defaulting to hostname 'linux'. Jul 14 23:39:43.539212 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 14 23:39:43.540611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:39:43.545503 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 23:39:43.547350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 23:39:43.549467 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 23:39:43.553302 systemd-networkd[1391]: lo: Link UP Jul 14 23:39:43.553311 systemd-networkd[1391]: lo: Gained carrier Jul 14 23:39:43.554571 systemd-networkd[1391]: Enumeration completed Jul 14 23:39:43.554660 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:39:43.555142 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:39:43.555152 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 23:39:43.555812 systemd-networkd[1391]: eth0: Link UP Jul 14 23:39:43.555820 systemd-networkd[1391]: eth0: Gained carrier Jul 14 23:39:43.555835 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:39:43.556147 systemd[1]: Reached target network.target - Network. Jul 14 23:39:43.569063 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 14 23:39:43.573536 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 23:39:43.580078 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 23:39:43.580899 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Jul 14 23:39:43.144737 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 23:39:43.152647 systemd-journald[1113]: Time jumped backwards, rotating. Jul 14 23:39:43.144793 systemd-timesyncd[1392]: Initial clock synchronization to Mon 2025-07-14 23:39:43.144654 UTC. Jul 14 23:39:43.145327 systemd-resolved[1318]: Clock change detected. Flushing caches. Jul 14 23:39:43.148317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:39:43.160253 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 23:39:43.162168 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 14 23:39:43.181422 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 23:39:43.195130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:39:43.199533 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 23:39:43.236617 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 23:39:43.238152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:39:43.239260 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:39:43.240377 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 23:39:43.241587 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
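systemd-networkd reports above that eth0 acquired 10.0.0.8/16 with gateway 10.0.0.1 via DHCPv4, after which timesyncd reaches 10.0.0.1:123. A small sketch that confirms the assigned address with the standard Linux SIOCGIFADDR ioctl; the interface name eth0 is taken from the log, the ioctl number is the usual constant:

    import fcntl
    import socket
    import struct

    SIOCGIFADDR = 0x8915  # standard Linux ioctl: get interface IPv4 address

    def ipv4_address(ifname: str = "eth0") -> str:
        # "eth0" is the interface named in the systemd-networkd lines above.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            packed = fcntl.ioctl(
                s.fileno(),
                SIOCGIFADDR,
                struct.pack("256s", ifname.encode()[:15]),
            )
        finally:
            s.close()
        return socket.inet_ntoa(packed[20:24])

    if __name__ == "__main__":
        # On the machine that produced this log this would print 10.0.0.8.
        print(ipv4_address())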
Jul 14 23:39:43.242961 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 23:39:43.244132 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 23:39:43.245462 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 23:39:43.246638 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 23:39:43.246674 systemd[1]: Reached target paths.target - Path Units. Jul 14 23:39:43.247562 systemd[1]: Reached target timers.target - Timer Units. Jul 14 23:39:43.249404 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 23:39:43.251814 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 23:39:43.255026 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 14 23:39:43.256435 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 14 23:39:43.257705 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 14 23:39:43.260855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 23:39:43.262290 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 14 23:39:43.264566 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 23:39:43.266197 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 23:39:43.267313 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:39:43.268242 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:39:43.269169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 23:39:43.269198 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 23:39:43.270125 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 23:39:43.271908 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 23:39:43.273279 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 23:39:43.277244 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 23:39:43.280258 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 23:39:43.282517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 23:39:43.283833 jq[1428]: false Jul 14 23:39:43.284322 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 23:39:43.288456 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 23:39:43.288757 dbus-daemon[1427]: [system] SELinux support is enabled Jul 14 23:39:43.292317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 23:39:43.298989 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 14 23:39:43.300129 extend-filesystems[1429]: Found loop3 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found loop4 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found loop5 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda1 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda2 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda3 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found usr Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda4 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda6 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda7 Jul 14 23:39:43.300129 extend-filesystems[1429]: Found vda9 Jul 14 23:39:43.300129 extend-filesystems[1429]: Checking size of /dev/vda9 Jul 14 23:39:43.303296 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 23:39:43.306640 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 23:39:43.307139 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 23:39:43.308663 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 23:39:43.315048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 23:39:43.319172 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 23:39:43.322583 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 23:39:43.325165 jq[1447]: true Jul 14 23:39:43.325346 extend-filesystems[1429]: Resized partition /dev/vda9 Jul 14 23:39:43.326700 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 23:39:43.326897 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 23:39:43.327196 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 23:39:43.327352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 23:39:43.329480 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 23:39:43.329668 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 23:39:43.333279 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Jul 14 23:39:43.338097 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 23:39:43.342930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 23:39:43.342968 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 23:39:43.345096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1367) Jul 14 23:39:43.348306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 23:39:43.348355 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 14 23:39:43.364889 update_engine[1445]: I20250714 23:39:43.364316 1445 main.cc:92] Flatcar Update Engine starting Jul 14 23:39:43.367039 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 23:39:43.368384 jq[1453]: true Jul 14 23:39:43.369203 update_engine[1445]: I20250714 23:39:43.369045 1445 update_check_scheduler.cc:74] Next update check in 2m28s Jul 14 23:39:43.380131 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 23:39:43.381640 systemd[1]: Started update-engine.service - Update Engine. Jul 14 23:39:43.388686 tar[1452]: linux-arm64/LICENSE Jul 14 23:39:43.395679 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 23:39:43.400295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 23:39:43.400812 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 23:39:43.400812 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 23:39:43.400812 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 23:39:43.413750 tar[1452]: linux-arm64/helm Jul 14 23:39:43.401788 systemd-logind[1441]: New seat seat0. Jul 14 23:39:43.413838 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Jul 14 23:39:43.402258 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 23:39:43.404724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 23:39:43.412777 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 23:39:43.435519 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Jul 14 23:39:43.438603 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 23:39:43.443580 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 23:39:43.452359 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 23:39:43.573460 containerd[1462]: time="2025-07-14T23:39:43.573361399Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 14 23:39:43.604769 containerd[1462]: time="2025-07-14T23:39:43.604716399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606146 containerd[1462]: time="2025-07-14T23:39:43.606104399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606146 containerd[1462]: time="2025-07-14T23:39:43.606134879Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 23:39:43.606232 containerd[1462]: time="2025-07-14T23:39:43.606156479Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 23:39:43.606337 containerd[1462]: time="2025-07-14T23:39:43.606310159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 23:39:43.606361 containerd[1462]: time="2025-07-14T23:39:43.606334399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
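The kernel and resize2fs messages above report the root filesystem on /dev/vda9 being grown online from 553472 to 1864699 blocks during first boot. A quick back-of-the-envelope conversion of those numbers into bytes (the 4096-byte block size comes from the "(4k) blocks" note in the resize output):

    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

    old_blocks = 553_472    # EXT4-fs (vda9): resizing filesystem from 553472 ...
    new_blocks = 1_864_699  # ... to 1864699 blocks

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 1024**3

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB, "
          f"grew by {gib(new_blocks - old_blocks):.2f} GiB")

That is roughly 2.11 GiB before and 7.11 GiB after the resize.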
type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606404 containerd[1462]: time="2025-07-14T23:39:43.606389719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606422 containerd[1462]: time="2025-07-14T23:39:43.606405199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606632 containerd[1462]: time="2025-07-14T23:39:43.606604399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606632 containerd[1462]: time="2025-07-14T23:39:43.606626559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606667 containerd[1462]: time="2025-07-14T23:39:43.606640519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606667 containerd[1462]: time="2025-07-14T23:39:43.606651559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606734 containerd[1462]: time="2025-07-14T23:39:43.606721079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.606919 containerd[1462]: time="2025-07-14T23:39:43.606903079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:39:43.607034 containerd[1462]: time="2025-07-14T23:39:43.607019679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:39:43.607054 containerd[1462]: time="2025-07-14T23:39:43.607035599Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 23:39:43.607144 containerd[1462]: time="2025-07-14T23:39:43.607129799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 23:39:43.607188 containerd[1462]: time="2025-07-14T23:39:43.607176879Z" level=info msg="metadata content store policy set" policy=shared Jul 14 23:39:43.611616 containerd[1462]: time="2025-07-14T23:39:43.611588999Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 23:39:43.611676 containerd[1462]: time="2025-07-14T23:39:43.611633999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 23:39:43.611676 containerd[1462]: time="2025-07-14T23:39:43.611649959Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 23:39:43.611676 containerd[1462]: time="2025-07-14T23:39:43.611666199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 23:39:43.611741 containerd[1462]: time="2025-07-14T23:39:43.611680719Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 14 23:39:43.611851 containerd[1462]: time="2025-07-14T23:39:43.611834959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 23:39:43.612123 containerd[1462]: time="2025-07-14T23:39:43.612063839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 23:39:43.612202 containerd[1462]: time="2025-07-14T23:39:43.612181999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 23:39:43.612226 containerd[1462]: time="2025-07-14T23:39:43.612205599Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 23:39:43.612226 containerd[1462]: time="2025-07-14T23:39:43.612221119Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 23:39:43.612268 containerd[1462]: time="2025-07-14T23:39:43.612236039Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612268 containerd[1462]: time="2025-07-14T23:39:43.612248959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612268 containerd[1462]: time="2025-07-14T23:39:43.612261599Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612313 containerd[1462]: time="2025-07-14T23:39:43.612275919Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612313 containerd[1462]: time="2025-07-14T23:39:43.612290759Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612313 containerd[1462]: time="2025-07-14T23:39:43.612302959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612357 containerd[1462]: time="2025-07-14T23:39:43.612314999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612357 containerd[1462]: time="2025-07-14T23:39:43.612332919Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 23:39:43.612389 containerd[1462]: time="2025-07-14T23:39:43.612355199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612389 containerd[1462]: time="2025-07-14T23:39:43.612369239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612457 containerd[1462]: time="2025-07-14T23:39:43.612396719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612457 containerd[1462]: time="2025-07-14T23:39:43.612409999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612457 containerd[1462]: time="2025-07-14T23:39:43.612421919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612457 containerd[1462]: time="2025-07-14T23:39:43.612434999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 14 23:39:43.612457 containerd[1462]: time="2025-07-14T23:39:43.612446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612459679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612472919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612486639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612498199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612519639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612545 containerd[1462]: time="2025-07-14T23:39:43.612534999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612637 containerd[1462]: time="2025-07-14T23:39:43.612549839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 23:39:43.612637 containerd[1462]: time="2025-07-14T23:39:43.612571359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612637 containerd[1462]: time="2025-07-14T23:39:43.612584159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612637 containerd[1462]: time="2025-07-14T23:39:43.612595079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 23:39:43.612770 containerd[1462]: time="2025-07-14T23:39:43.612754319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 23:39:43.612799 containerd[1462]: time="2025-07-14T23:39:43.612776079Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 23:39:43.612799 containerd[1462]: time="2025-07-14T23:39:43.612788239Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 23:39:43.612834 containerd[1462]: time="2025-07-14T23:39:43.612800279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 23:39:43.612834 containerd[1462]: time="2025-07-14T23:39:43.612809759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 23:39:43.612834 containerd[1462]: time="2025-07-14T23:39:43.612828159Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 23:39:43.612885 containerd[1462]: time="2025-07-14T23:39:43.612837639Z" level=info msg="NRI interface is disabled by configuration." Jul 14 23:39:43.612885 containerd[1462]: time="2025-07-14T23:39:43.612847959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 23:39:43.613236 containerd[1462]: time="2025-07-14T23:39:43.613192679Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 23:39:43.613348 containerd[1462]: time="2025-07-14T23:39:43.613244479Z" level=info msg="Connect containerd service" Jul 14 23:39:43.613348 containerd[1462]: time="2025-07-14T23:39:43.613277639Z" level=info msg="using legacy CRI server" Jul 14 23:39:43.613348 containerd[1462]: time="2025-07-14T23:39:43.613284559Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 23:39:43.613529 containerd[1462]: time="2025-07-14T23:39:43.613505239Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 23:39:43.614414 containerd[1462]: time="2025-07-14T23:39:43.614383679Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:39:43.614837 
containerd[1462]: time="2025-07-14T23:39:43.614619559Z" level=info msg="Start subscribing containerd event" Jul 14 23:39:43.615212 containerd[1462]: time="2025-07-14T23:39:43.615140199Z" level=info msg="Start recovering state" Jul 14 23:39:43.615867 containerd[1462]: time="2025-07-14T23:39:43.615657159Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 23:39:43.615867 containerd[1462]: time="2025-07-14T23:39:43.615734959Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 23:39:43.616031 containerd[1462]: time="2025-07-14T23:39:43.616006199Z" level=info msg="Start event monitor" Jul 14 23:39:43.616064 containerd[1462]: time="2025-07-14T23:39:43.616036079Z" level=info msg="Start snapshots syncer" Jul 14 23:39:43.616064 containerd[1462]: time="2025-07-14T23:39:43.616052039Z" level=info msg="Start cni network conf syncer for default" Jul 14 23:39:43.616064 containerd[1462]: time="2025-07-14T23:39:43.616062679Z" level=info msg="Start streaming server" Jul 14 23:39:43.616298 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 23:39:43.617271 containerd[1462]: time="2025-07-14T23:39:43.617247679Z" level=info msg="containerd successfully booted in 0.045271s" Jul 14 23:39:43.775096 tar[1452]: linux-arm64/README.md Jul 14 23:39:43.787117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 23:39:44.177542 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 23:39:44.195958 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 23:39:44.210431 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 23:39:44.215846 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 23:39:44.217134 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 23:39:44.220125 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 23:39:44.231316 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 23:39:44.234260 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 23:39:44.236444 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 23:39:44.237781 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 23:39:44.690202 systemd-networkd[1391]: eth0: Gained IPv6LL Jul 14 23:39:44.694130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 23:39:44.695898 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 23:39:44.707376 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 23:39:44.709799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:39:44.711925 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 23:39:44.725872 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 23:39:44.727145 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 23:39:44.729156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 23:39:44.735360 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 23:39:45.261030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:39:45.262594 systemd[1]: Reached target multi-user.target - Multi-User System. 
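[Editorial sketch, not part of the log: the entries above show containerd finishing boot and serving on /run/containerd/containerd.sock. A minimal way to talk to that daemon is the containerd v1 Go client, assumed to be available as the github.com/containerd/containerd module; everything below is illustrative.]

```go
// Minimal sketch: connect to the containerd socket reported above and print
// the daemon version. Error handling is kept short for brevity.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same gRPC socket the daemon reports with msg=serving... above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes containers and images in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd version:", version.Version, "revision:", version.Revision)
}
```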
Jul 14 23:39:45.264479 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:39:45.268243 systemd[1]: Startup finished in 553ms (kernel) + 6.257s (initrd) + 3.781s (userspace) = 10.593s. Jul 14 23:39:45.661354 kubelet[1541]: E0714 23:39:45.661225 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:39:45.663588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:39:45.663732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:39:45.666161 systemd[1]: kubelet.service: Consumed 803ms CPU time, 255M memory peak. Jul 14 23:39:48.122819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 23:39:48.124122 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:39906.service - OpenSSH per-connection server daemon (10.0.0.1:39906). Jul 14 23:39:48.183952 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 39906 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:48.185770 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:48.201344 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 23:39:48.212319 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 23:39:48.214059 systemd-logind[1441]: New session 1 of user core. Jul 14 23:39:48.221694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 23:39:48.223747 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 23:39:48.230752 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:39:48.232797 systemd-logind[1441]: New session c1 of user core. Jul 14 23:39:48.341875 systemd[1558]: Queued start job for default target default.target. Jul 14 23:39:48.349055 systemd[1558]: Created slice app.slice - User Application Slice. Jul 14 23:39:48.349104 systemd[1558]: Reached target paths.target - Paths. Jul 14 23:39:48.349144 systemd[1558]: Reached target timers.target - Timers. Jul 14 23:39:48.350394 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 23:39:48.359612 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 23:39:48.359673 systemd[1558]: Reached target sockets.target - Sockets. Jul 14 23:39:48.359710 systemd[1558]: Reached target basic.target - Basic System. Jul 14 23:39:48.359740 systemd[1558]: Reached target default.target - Main User Target. Jul 14 23:39:48.359766 systemd[1558]: Startup finished in 121ms. Jul 14 23:39:48.360019 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 23:39:48.361860 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 23:39:48.422277 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:39908.service - OpenSSH per-connection server daemon (10.0.0.1:39908). 
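[Editorial sketch, not part of the log: the kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml, which is normal before kubeadm writes that file. The snippet below only reproduces the failing file lookup in isolation with the Go standard library; it is not the kubelet's actual loader code.]

```go
// Illustrative sketch: the same os.Open failure mode ("no such file or
// directory") that makes kubelet.service exit with status 1 above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path taken from the log
	f, err := os.Open(path)
	if err != nil {
		// Before `kubeadm init`/`join` writes this file, the open fails with ENOENT.
		fmt.Println("kubelet config not present yet:", err)
		return
	}
	defer f.Close()
	fmt.Println("kubelet config found at", path)
}
```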
Jul 14 23:39:48.460774 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 39908 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:48.461882 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:48.466158 systemd-logind[1441]: New session 2 of user core. Jul 14 23:39:48.481318 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 23:39:48.532250 sshd[1571]: Connection closed by 10.0.0.1 port 39908 Jul 14 23:39:48.532633 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Jul 14 23:39:48.542208 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:39908.service: Deactivated successfully. Jul 14 23:39:48.543665 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 23:39:48.545635 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jul 14 23:39:48.560434 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:39918.service - OpenSSH per-connection server daemon (10.0.0.1:39918). Jul 14 23:39:48.561416 systemd-logind[1441]: Removed session 2. Jul 14 23:39:48.594338 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 39918 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:48.595441 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:48.600306 systemd-logind[1441]: New session 3 of user core. Jul 14 23:39:48.614266 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 23:39:48.662096 sshd[1579]: Connection closed by 10.0.0.1 port 39918 Jul 14 23:39:48.662566 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Jul 14 23:39:48.672734 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:39918.service: Deactivated successfully. Jul 14 23:39:48.674448 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 23:39:48.675214 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jul 14 23:39:48.684334 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:39924.service - OpenSSH per-connection server daemon (10.0.0.1:39924). Jul 14 23:39:48.685432 systemd-logind[1441]: Removed session 3. Jul 14 23:39:48.719884 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 39924 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:48.721314 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:48.726524 systemd-logind[1441]: New session 4 of user core. Jul 14 23:39:48.738261 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 23:39:48.790004 sshd[1587]: Connection closed by 10.0.0.1 port 39924 Jul 14 23:39:48.791348 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Jul 14 23:39:48.809538 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:39924.service: Deactivated successfully. Jul 14 23:39:48.811137 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 23:39:48.811936 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jul 14 23:39:48.816439 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:39938.service - OpenSSH per-connection server daemon (10.0.0.1:39938). Jul 14 23:39:48.817236 systemd-logind[1441]: Removed session 4. 
Jul 14 23:39:48.851280 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 39938 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:48.852419 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:48.856141 systemd-logind[1441]: New session 5 of user core. Jul 14 23:39:48.867211 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 23:39:48.927544 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 23:39:48.927822 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:39:48.945932 sudo[1596]: pam_unix(sudo:session): session closed for user root Jul 14 23:39:48.947257 sshd[1595]: Connection closed by 10.0.0.1 port 39938 Jul 14 23:39:48.947794 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Jul 14 23:39:48.962207 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:39938.service: Deactivated successfully. Jul 14 23:39:48.964369 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 23:39:48.965873 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jul 14 23:39:48.976352 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:39948.service - OpenSSH per-connection server daemon (10.0.0.1:39948). Jul 14 23:39:48.977721 systemd-logind[1441]: Removed session 5. Jul 14 23:39:49.011125 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 39948 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:49.012386 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:49.015876 systemd-logind[1441]: New session 6 of user core. Jul 14 23:39:49.028238 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 23:39:49.078105 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 23:39:49.078377 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:39:49.081286 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 14 23:39:49.085633 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 14 23:39:49.085892 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:39:49.102379 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 23:39:49.124144 augenrules[1628]: No rules Jul 14 23:39:49.125172 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 23:39:49.125426 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 23:39:49.126247 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 14 23:39:49.127923 sshd[1604]: Connection closed by 10.0.0.1 port 39948 Jul 14 23:39:49.127801 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jul 14 23:39:49.146132 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:39948.service: Deactivated successfully. Jul 14 23:39:49.147414 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 23:39:49.149291 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jul 14 23:39:49.156331 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:39958.service - OpenSSH per-connection server daemon (10.0.0.1:39958). Jul 14 23:39:49.157105 systemd-logind[1441]: Removed session 6. 
Jul 14 23:39:49.190346 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 39958 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:39:49.191324 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:39:49.195546 systemd-logind[1441]: New session 7 of user core. Jul 14 23:39:49.209249 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 23:39:49.259483 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 23:39:49.259758 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:39:49.597385 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 23:39:49.597393 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 23:39:49.842919 dockerd[1661]: time="2025-07-14T23:39:49.842861679Z" level=info msg="Starting up" Jul 14 23:39:50.039877 dockerd[1661]: time="2025-07-14T23:39:50.039777559Z" level=info msg="Loading containers: start." Jul 14 23:39:50.182149 kernel: Initializing XFRM netlink socket Jul 14 23:39:50.246476 systemd-networkd[1391]: docker0: Link UP Jul 14 23:39:50.278306 dockerd[1661]: time="2025-07-14T23:39:50.278266079Z" level=info msg="Loading containers: done." Jul 14 23:39:50.291473 dockerd[1661]: time="2025-07-14T23:39:50.291385039Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 23:39:50.291473 dockerd[1661]: time="2025-07-14T23:39:50.291466319Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 14 23:39:50.291742 dockerd[1661]: time="2025-07-14T23:39:50.291635879Z" level=info msg="Daemon has completed initialization" Jul 14 23:39:50.318532 dockerd[1661]: time="2025-07-14T23:39:50.318423279Z" level=info msg="API listen on /run/docker.sock" Jul 14 23:39:50.318702 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 23:39:51.140419 containerd[1462]: time="2025-07-14T23:39:51.140359319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 14 23:39:52.132146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426825102.mount: Deactivated successfully. 
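[Editorial sketch, not part of the log: dockerd above reports "API listen on /run/docker.sock". A simple reachability check against that socket can use the Docker Engine Go SDK, assumed to be available as github.com/docker/docker/client; this is only an illustration.]

```go
// Minimal sketch: ping the Docker daemon over the unix socket named in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"), // socket from "API listen on /run/docker.sock"
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("daemon reachable, API version:", ping.APIVersion)
}
```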
Jul 14 23:39:53.146009 containerd[1462]: time="2025-07-14T23:39:53.145960799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:53.146467 containerd[1462]: time="2025-07-14T23:39:53.146438439Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 14 23:39:53.147421 containerd[1462]: time="2025-07-14T23:39:53.147367199Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:53.150558 containerd[1462]: time="2025-07-14T23:39:53.150510959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:53.151542 containerd[1462]: time="2025-07-14T23:39:53.151511719Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.0110964s" Jul 14 23:39:53.151592 containerd[1462]: time="2025-07-14T23:39:53.151545719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 14 23:39:53.152298 containerd[1462]: time="2025-07-14T23:39:53.152274999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 14 23:39:54.417508 containerd[1462]: time="2025-07-14T23:39:54.417454839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:54.418437 containerd[1462]: time="2025-07-14T23:39:54.418205799Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 14 23:39:54.419149 containerd[1462]: time="2025-07-14T23:39:54.419116079Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:54.422159 containerd[1462]: time="2025-07-14T23:39:54.422131759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:54.423926 containerd[1462]: time="2025-07-14T23:39:54.423890119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.27158532s" Jul 14 23:39:54.423964 containerd[1462]: time="2025-07-14T23:39:54.423927119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 14 23:39:54.424459 
containerd[1462]: time="2025-07-14T23:39:54.424420959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 14 23:39:55.534332 containerd[1462]: time="2025-07-14T23:39:55.534281759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:55.535395 containerd[1462]: time="2025-07-14T23:39:55.535308879Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 14 23:39:55.536089 containerd[1462]: time="2025-07-14T23:39:55.536011479Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:55.539537 containerd[1462]: time="2025-07-14T23:39:55.539506279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:55.541475 containerd[1462]: time="2025-07-14T23:39:55.541339799Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.11688612s" Jul 14 23:39:55.541475 containerd[1462]: time="2025-07-14T23:39:55.541382519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 14 23:39:55.542205 containerd[1462]: time="2025-07-14T23:39:55.542122599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 14 23:39:55.914044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 23:39:55.922299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:39:56.023920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:39:56.027020 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:39:56.065009 kubelet[1930]: E0714 23:39:56.064953 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:39:56.067795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:39:56.067938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:39:56.068376 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.7M memory peak. Jul 14 23:39:56.614789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030157479.mount: Deactivated successfully. 
Jul 14 23:39:56.941994 containerd[1462]: time="2025-07-14T23:39:56.941873919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:56.942527 containerd[1462]: time="2025-07-14T23:39:56.942457399Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 14 23:39:56.943351 containerd[1462]: time="2025-07-14T23:39:56.943316559Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:56.945406 containerd[1462]: time="2025-07-14T23:39:56.945374359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:56.946007 containerd[1462]: time="2025-07-14T23:39:56.945968119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.40381616s" Jul 14 23:39:56.946007 containerd[1462]: time="2025-07-14T23:39:56.946003399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 14 23:39:56.946470 containerd[1462]: time="2025-07-14T23:39:56.946449039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 23:39:57.645917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837433546.mount: Deactivated successfully. 
Jul 14 23:39:58.460375 containerd[1462]: time="2025-07-14T23:39:58.460312639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.460988 containerd[1462]: time="2025-07-14T23:39:58.460930119Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 23:39:58.461859 containerd[1462]: time="2025-07-14T23:39:58.461820519Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.465032 containerd[1462]: time="2025-07-14T23:39:58.464998199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.466401 containerd[1462]: time="2025-07-14T23:39:58.466363839Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.51988348s" Jul 14 23:39:58.466442 containerd[1462]: time="2025-07-14T23:39:58.466401199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 23:39:58.466918 containerd[1462]: time="2025-07-14T23:39:58.466890399Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 23:39:58.932121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927773899.mount: Deactivated successfully. 
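[Editorial sketch, not part of the log: the PullImage/ImageCreate entries above and below are containerd pulling the control-plane images into the k8s.io namespace. A pull of one of the same references through the Go client (same client setup as the earlier sketch) would look roughly like this; only the image reference is taken from the log.]

```go
// Minimal sketch: pull one of the logged images via the containerd Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes images live in the k8s.io namespace, matching the pulls above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```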
Jul 14 23:39:58.935524 containerd[1462]: time="2025-07-14T23:39:58.935462479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.936147 containerd[1462]: time="2025-07-14T23:39:58.935926079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 23:39:58.936890 containerd[1462]: time="2025-07-14T23:39:58.936829959Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.939199 containerd[1462]: time="2025-07-14T23:39:58.939160959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:39:58.940215 containerd[1462]: time="2025-07-14T23:39:58.940142999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 473.22348ms" Jul 14 23:39:58.940215 containerd[1462]: time="2025-07-14T23:39:58.940171999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 23:39:58.940760 containerd[1462]: time="2025-07-14T23:39:58.940587319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 14 23:39:59.494319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557053519.mount: Deactivated successfully. Jul 14 23:40:01.338586 containerd[1462]: time="2025-07-14T23:40:01.338529079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:01.339101 containerd[1462]: time="2025-07-14T23:40:01.339038039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 14 23:40:01.339865 containerd[1462]: time="2025-07-14T23:40:01.339831959Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:01.343396 containerd[1462]: time="2025-07-14T23:40:01.343331919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:01.344662 containerd[1462]: time="2025-07-14T23:40:01.344626159Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.40400644s" Jul 14 23:40:01.344722 containerd[1462]: time="2025-07-14T23:40:01.344662039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 14 23:40:06.318714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 14 23:40:06.329254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:06.558002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:06.561199 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:40:06.593012 kubelet[2086]: E0714 23:40:06.592895 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:40:06.595706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:40:06.595949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:40:06.596367 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109M memory peak. Jul 14 23:40:08.086492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:08.086940 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109M memory peak. Jul 14 23:40:08.097280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:08.116616 systemd[1]: Reload requested from client PID 2101 ('systemctl') (unit session-7.scope)... Jul 14 23:40:08.116636 systemd[1]: Reloading... Jul 14 23:40:08.188148 zram_generator::config[2148]: No configuration found. Jul 14 23:40:08.343489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:40:08.416159 systemd[1]: Reloading finished in 299 ms. Jul 14 23:40:08.455057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:08.457955 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:08.458621 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 23:40:08.460118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:08.460225 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95.1M memory peak. Jul 14 23:40:08.461772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:08.563121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:08.566789 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:40:08.603245 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:40:08.603245 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 23:40:08.603245 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 23:40:08.603245 kubelet[2192]: I0714 23:40:08.603208 2192 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:40:09.771103 kubelet[2192]: I0714 23:40:09.770767 2192 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 23:40:09.771103 kubelet[2192]: I0714 23:40:09.770801 2192 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:40:09.771103 kubelet[2192]: I0714 23:40:09.771059 2192 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 23:40:09.801048 kubelet[2192]: E0714 23:40:09.800995 2192 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:09.801931 kubelet[2192]: I0714 23:40:09.801846 2192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:40:09.813961 kubelet[2192]: E0714 23:40:09.813932 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:40:09.813961 kubelet[2192]: I0714 23:40:09.813959 2192 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:40:09.817145 kubelet[2192]: I0714 23:40:09.817118 2192 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 23:40:09.817754 kubelet[2192]: I0714 23:40:09.817702 2192 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:40:09.817919 kubelet[2192]: I0714 23:40:09.817750 2192 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:40:09.818260 kubelet[2192]: I0714 23:40:09.818247 2192 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 23:40:09.818260 kubelet[2192]: I0714 23:40:09.818260 2192 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 23:40:09.818694 kubelet[2192]: I0714 23:40:09.818680 2192 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:40:09.824680 kubelet[2192]: I0714 23:40:09.824654 2192 kubelet.go:446] "Attempting to sync node with API server" Jul 14 23:40:09.824745 kubelet[2192]: I0714 23:40:09.824685 2192 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:40:09.824745 kubelet[2192]: I0714 23:40:09.824706 2192 kubelet.go:352] "Adding apiserver pod source" Jul 14 23:40:09.824745 kubelet[2192]: I0714 23:40:09.824716 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:40:09.840199 kubelet[2192]: I0714 23:40:09.828658 2192 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 23:40:09.840199 kubelet[2192]: I0714 23:40:09.830006 2192 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:40:09.840199 kubelet[2192]: W0714 23:40:09.830543 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 14 23:40:09.840199 kubelet[2192]: I0714 23:40:09.832959 2192 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 23:40:09.840199 kubelet[2192]: I0714 23:40:09.832988 2192 server.go:1287] "Started kubelet" Jul 14 23:40:09.845992 kubelet[2192]: W0714 23:40:09.845888 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:09.845992 kubelet[2192]: W0714 23:40:09.845927 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:09.845992 kubelet[2192]: E0714 23:40:09.845958 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:09.845992 kubelet[2192]: E0714 23:40:09.845981 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:09.846141 kubelet[2192]: I0714 23:40:09.846004 2192 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:40:09.846893 kubelet[2192]: I0714 23:40:09.846861 2192 server.go:479] "Adding debug handlers to kubelet server" Jul 14 23:40:09.852279 kubelet[2192]: I0714 23:40:09.852213 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:40:09.852693 kubelet[2192]: I0714 23:40:09.852667 2192 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:40:09.854320 kubelet[2192]: E0714 23:40:09.853614 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18524294540856bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 23:40:09.832969919 +0000 UTC m=+1.263008561,LastTimestamp:2025-07-14 23:40:09.832969919 +0000 UTC m=+1.263008561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 23:40:09.854630 kubelet[2192]: I0714 23:40:09.854606 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:40:09.858286 kubelet[2192]: I0714 23:40:09.858262 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:40:09.858471 kubelet[2192]: I0714 23:40:09.858458 2192 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Jul 14 23:40:09.860509 kubelet[2192]: I0714 23:40:09.859107 2192 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 23:40:09.860509 kubelet[2192]: E0714 23:40:09.858735 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:09.860509 kubelet[2192]: I0714 23:40:09.859163 2192 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:40:09.860509 kubelet[2192]: W0714 23:40:09.860300 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:09.860509 kubelet[2192]: E0714 23:40:09.860352 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:09.860509 kubelet[2192]: E0714 23:40:09.860421 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Jul 14 23:40:09.862835 kubelet[2192]: I0714 23:40:09.862799 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:40:09.864168 kubelet[2192]: E0714 23:40:09.864136 2192 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:40:09.864664 kubelet[2192]: I0714 23:40:09.864636 2192 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:40:09.864664 kubelet[2192]: I0714 23:40:09.864654 2192 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:40:09.870822 kubelet[2192]: I0714 23:40:09.870789 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 23:40:09.871871 kubelet[2192]: I0714 23:40:09.871845 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 23:40:09.871871 kubelet[2192]: I0714 23:40:09.871868 2192 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 23:40:09.871936 kubelet[2192]: I0714 23:40:09.871886 2192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
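[Editorial sketch, not part of the log: the repeated "dial tcp 10.0.0.8:6443: connect: connection refused" errors around here simply mean nothing is listening on the API server port yet while the static pods are still being created. The probe below reproduces that check with the Go standard library; the address is copied from the log.]

```go
// Diagnostic sketch: is anything listening on the kube-apiserver endpoint yet?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.8:6443", 2*time.Second)
	if err != nil {
		// Expected while the kube-apiserver static pod has not started.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```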
Jul 14 23:40:09.871936 kubelet[2192]: I0714 23:40:09.871894 2192 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 23:40:09.871982 kubelet[2192]: E0714 23:40:09.871937 2192 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:40:09.876285 kubelet[2192]: W0714 23:40:09.876181 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:09.876285 kubelet[2192]: E0714 23:40:09.876221 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:09.876940 kubelet[2192]: I0714 23:40:09.876924 2192 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 23:40:09.876940 kubelet[2192]: I0714 23:40:09.876938 2192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 23:40:09.877064 kubelet[2192]: I0714 23:40:09.876955 2192 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:40:09.951767 kubelet[2192]: I0714 23:40:09.951740 2192 policy_none.go:49] "None policy: Start" Jul 14 23:40:09.951767 kubelet[2192]: I0714 23:40:09.951770 2192 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 23:40:09.951767 kubelet[2192]: I0714 23:40:09.951782 2192 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:40:09.956421 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 23:40:09.960144 kubelet[2192]: E0714 23:40:09.960105 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:09.969977 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 23:40:09.972179 kubelet[2192]: E0714 23:40:09.972131 2192 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 23:40:09.972771 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 23:40:09.987374 kubelet[2192]: I0714 23:40:09.987110 2192 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:40:09.987374 kubelet[2192]: I0714 23:40:09.987323 2192 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:40:09.987374 kubelet[2192]: I0714 23:40:09.987335 2192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:40:09.987656 kubelet[2192]: I0714 23:40:09.987636 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:40:09.988511 kubelet[2192]: E0714 23:40:09.988485 2192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 23:40:09.988589 kubelet[2192]: E0714 23:40:09.988542 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 23:40:10.061284 kubelet[2192]: E0714 23:40:10.061176 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Jul 14 23:40:10.089325 kubelet[2192]: I0714 23:40:10.089297 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:10.089730 kubelet[2192]: E0714 23:40:10.089698 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Jul 14 23:40:10.180580 systemd[1]: Created slice kubepods-burstable-podb8b42cbba3b58db2494c3fab9bdd0f41.slice - libcontainer container kubepods-burstable-podb8b42cbba3b58db2494c3fab9bdd0f41.slice. Jul 14 23:40:10.187761 kubelet[2192]: E0714 23:40:10.187729 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:10.189558 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 14 23:40:10.206099 kubelet[2192]: E0714 23:40:10.206041 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:10.208258 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 14 23:40:10.209641 kubelet[2192]: E0714 23:40:10.209612 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:10.261935 kubelet[2192]: I0714 23:40:10.261889 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:10.261935 kubelet[2192]: I0714 23:40:10.261930 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:10.262095 kubelet[2192]: I0714 23:40:10.261950 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:10.262095 kubelet[2192]: I0714 23:40:10.261969 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:10.262095 kubelet[2192]: I0714 23:40:10.261989 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:10.262095 kubelet[2192]: I0714 23:40:10.262003 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:10.262095 kubelet[2192]: I0714 23:40:10.262031 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:10.262197 kubelet[2192]: I0714 23:40:10.262056 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:10.262197 kubelet[2192]: I0714 23:40:10.262071 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:10.291024 kubelet[2192]: I0714 23:40:10.290991 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:10.291419 kubelet[2192]: E0714 23:40:10.291389 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Jul 14 23:40:10.461993 kubelet[2192]: E0714 23:40:10.461856 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Jul 14 23:40:10.488494 kubelet[2192]: E0714 23:40:10.488402 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:10.490957 containerd[1462]: time="2025-07-14T23:40:10.490909759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8b42cbba3b58db2494c3fab9bdd0f41,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:10.506957 kubelet[2192]: E0714 23:40:10.506926 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:10.507394 containerd[1462]: time="2025-07-14T23:40:10.507361319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:10.510782 kubelet[2192]: E0714 23:40:10.510714 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:10.511408 containerd[1462]: time="2025-07-14T23:40:10.511313439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:10.691946 kubelet[2192]: W0714 23:40:10.691886 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:10.692040 kubelet[2192]: E0714 23:40:10.691949 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:10.692696 kubelet[2192]: I0714 23:40:10.692672 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:10.693011 kubelet[2192]: E0714 23:40:10.692971 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Jul 14 23:40:11.021907 kubelet[2192]: 
W0714 23:40:11.021855 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:11.021907 kubelet[2192]: E0714 23:40:11.021905 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:11.074597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105756737.mount: Deactivated successfully. Jul 14 23:40:11.077815 containerd[1462]: time="2025-07-14T23:40:11.077764839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:40:11.080126 containerd[1462]: time="2025-07-14T23:40:11.080069279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 23:40:11.081415 containerd[1462]: time="2025-07-14T23:40:11.081370879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:40:11.082841 containerd[1462]: time="2025-07-14T23:40:11.082802039Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:40:11.083933 containerd[1462]: time="2025-07-14T23:40:11.083888399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:40:11.084808 containerd[1462]: time="2025-07-14T23:40:11.084739599Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:40:11.085262 containerd[1462]: time="2025-07-14T23:40:11.085223759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:40:11.087968 containerd[1462]: time="2025-07-14T23:40:11.087926839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:40:11.089672 containerd[1462]: time="2025-07-14T23:40:11.089537039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.13716ms" Jul 14 23:40:11.093569 containerd[1462]: time="2025-07-14T23:40:11.091996079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 600.99252ms" Jul 14 23:40:11.095348 containerd[1462]: time="2025-07-14T23:40:11.095259079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.8208ms" Jul 14 23:40:11.105150 kubelet[2192]: W0714 23:40:11.105022 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:11.105150 kubelet[2192]: E0714 23:40:11.105068 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:11.239663 containerd[1462]: time="2025-07-14T23:40:11.239092959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:11.239663 containerd[1462]: time="2025-07-14T23:40:11.239192199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:11.239663 containerd[1462]: time="2025-07-14T23:40:11.239209319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.239663 containerd[1462]: time="2025-07-14T23:40:11.239309279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.241162 containerd[1462]: time="2025-07-14T23:40:11.241065319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:11.241244 containerd[1462]: time="2025-07-14T23:40:11.241174599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:11.241833 containerd[1462]: time="2025-07-14T23:40:11.241727199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.241833 containerd[1462]: time="2025-07-14T23:40:11.241820439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.243195 containerd[1462]: time="2025-07-14T23:40:11.243114399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:11.243246 containerd[1462]: time="2025-07-14T23:40:11.243217319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:11.243279 containerd[1462]: time="2025-07-14T23:40:11.243251599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.243902 containerd[1462]: time="2025-07-14T23:40:11.243818679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:11.263327 kubelet[2192]: E0714 23:40:11.263242 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" Jul 14 23:40:11.263333 systemd[1]: Started cri-containerd-46456f65fd0c19ed39effa0ab7a31833ef65c736de78ee36d6b1122af3f576af.scope - libcontainer container 46456f65fd0c19ed39effa0ab7a31833ef65c736de78ee36d6b1122af3f576af. Jul 14 23:40:11.264439 systemd[1]: Started cri-containerd-8b05945f12866f87d7df9ed0f1eb62d5e7a143db77567386f3c206824cb509be.scope - libcontainer container 8b05945f12866f87d7df9ed0f1eb62d5e7a143db77567386f3c206824cb509be. Jul 14 23:40:11.265849 systemd[1]: Started cri-containerd-a636ed1182952ddf308d6a3149d75cfc7c18430140b73a7d3b4ea62e46f0c300.scope - libcontainer container a636ed1182952ddf308d6a3149d75cfc7c18430140b73a7d3b4ea62e46f0c300. Jul 14 23:40:11.295717 containerd[1462]: time="2025-07-14T23:40:11.294486679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b05945f12866f87d7df9ed0f1eb62d5e7a143db77567386f3c206824cb509be\"" Jul 14 23:40:11.296459 kubelet[2192]: E0714 23:40:11.296422 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:11.300549 containerd[1462]: time="2025-07-14T23:40:11.300503359Z" level=info msg="CreateContainer within sandbox \"8b05945f12866f87d7df9ed0f1eb62d5e7a143db77567386f3c206824cb509be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 23:40:11.305968 containerd[1462]: time="2025-07-14T23:40:11.305900599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"46456f65fd0c19ed39effa0ab7a31833ef65c736de78ee36d6b1122af3f576af\"" Jul 14 23:40:11.307658 containerd[1462]: time="2025-07-14T23:40:11.307383799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8b42cbba3b58db2494c3fab9bdd0f41,Namespace:kube-system,Attempt:0,} returns sandbox id \"a636ed1182952ddf308d6a3149d75cfc7c18430140b73a7d3b4ea62e46f0c300\"" Jul 14 23:40:11.307841 kubelet[2192]: E0714 23:40:11.307417 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:11.307955 kubelet[2192]: E0714 23:40:11.307928 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:11.309256 containerd[1462]: time="2025-07-14T23:40:11.309228479Z" level=info msg="CreateContainer within sandbox \"46456f65fd0c19ed39effa0ab7a31833ef65c736de78ee36d6b1122af3f576af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 23:40:11.310586 containerd[1462]: time="2025-07-14T23:40:11.310551319Z" level=info msg="CreateContainer within sandbox \"a636ed1182952ddf308d6a3149d75cfc7c18430140b73a7d3b4ea62e46f0c300\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 23:40:11.317986 
containerd[1462]: time="2025-07-14T23:40:11.317940439Z" level=info msg="CreateContainer within sandbox \"8b05945f12866f87d7df9ed0f1eb62d5e7a143db77567386f3c206824cb509be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b92fea1f9e978872fb385fe5cced4d6b9c93192aa09ac5d3ba5decf79bfa13a1\"" Jul 14 23:40:11.318585 containerd[1462]: time="2025-07-14T23:40:11.318531239Z" level=info msg="StartContainer for \"b92fea1f9e978872fb385fe5cced4d6b9c93192aa09ac5d3ba5decf79bfa13a1\"" Jul 14 23:40:11.325065 containerd[1462]: time="2025-07-14T23:40:11.325027879Z" level=info msg="CreateContainer within sandbox \"46456f65fd0c19ed39effa0ab7a31833ef65c736de78ee36d6b1122af3f576af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c54acdc1da1830594d3650bdc9642af8dc5847894544183949bef0d8cb29cb1a\"" Jul 14 23:40:11.325653 containerd[1462]: time="2025-07-14T23:40:11.325566479Z" level=info msg="StartContainer for \"c54acdc1da1830594d3650bdc9642af8dc5847894544183949bef0d8cb29cb1a\"" Jul 14 23:40:11.327366 containerd[1462]: time="2025-07-14T23:40:11.327259319Z" level=info msg="CreateContainer within sandbox \"a636ed1182952ddf308d6a3149d75cfc7c18430140b73a7d3b4ea62e46f0c300\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0fd5a43d5ebe08a7df8faf6992c66e006e2dd29ed15c119e99c185e91750dae\"" Jul 14 23:40:11.328092 containerd[1462]: time="2025-07-14T23:40:11.327692159Z" level=info msg="StartContainer for \"a0fd5a43d5ebe08a7df8faf6992c66e006e2dd29ed15c119e99c185e91750dae\"" Jul 14 23:40:11.350283 systemd[1]: Started cri-containerd-b92fea1f9e978872fb385fe5cced4d6b9c93192aa09ac5d3ba5decf79bfa13a1.scope - libcontainer container b92fea1f9e978872fb385fe5cced4d6b9c93192aa09ac5d3ba5decf79bfa13a1. Jul 14 23:40:11.353946 systemd[1]: Started cri-containerd-a0fd5a43d5ebe08a7df8faf6992c66e006e2dd29ed15c119e99c185e91750dae.scope - libcontainer container a0fd5a43d5ebe08a7df8faf6992c66e006e2dd29ed15c119e99c185e91750dae. Jul 14 23:40:11.354994 systemd[1]: Started cri-containerd-c54acdc1da1830594d3650bdc9642af8dc5847894544183949bef0d8cb29cb1a.scope - libcontainer container c54acdc1da1830594d3650bdc9642af8dc5847894544183949bef0d8cb29cb1a. 
Jul 14 23:40:11.388776 containerd[1462]: time="2025-07-14T23:40:11.388736999Z" level=info msg="StartContainer for \"b92fea1f9e978872fb385fe5cced4d6b9c93192aa09ac5d3ba5decf79bfa13a1\" returns successfully" Jul 14 23:40:11.413983 kubelet[2192]: W0714 23:40:11.413915 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Jul 14 23:40:11.414116 kubelet[2192]: E0714 23:40:11.413989 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:40:11.427129 containerd[1462]: time="2025-07-14T23:40:11.426984319Z" level=info msg="StartContainer for \"a0fd5a43d5ebe08a7df8faf6992c66e006e2dd29ed15c119e99c185e91750dae\" returns successfully" Jul 14 23:40:11.427249 containerd[1462]: time="2025-07-14T23:40:11.426984359Z" level=info msg="StartContainer for \"c54acdc1da1830594d3650bdc9642af8dc5847894544183949bef0d8cb29cb1a\" returns successfully" Jul 14 23:40:11.494647 kubelet[2192]: I0714 23:40:11.494623 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:11.495247 kubelet[2192]: E0714 23:40:11.495218 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Jul 14 23:40:11.882651 kubelet[2192]: E0714 23:40:11.881921 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:11.882651 kubelet[2192]: E0714 23:40:11.882056 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:11.884385 kubelet[2192]: E0714 23:40:11.884359 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:11.884480 kubelet[2192]: E0714 23:40:11.884459 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:11.886391 kubelet[2192]: E0714 23:40:11.886368 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:11.886509 kubelet[2192]: E0714 23:40:11.886461 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:12.867438 kubelet[2192]: E0714 23:40:12.867388 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 23:40:12.888570 kubelet[2192]: E0714 23:40:12.888535 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:12.888691 kubelet[2192]: E0714 23:40:12.888651 2192 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:12.888910 kubelet[2192]: E0714 23:40:12.888895 2192 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:40:12.888994 kubelet[2192]: E0714 23:40:12.888981 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:13.098658 kubelet[2192]: I0714 23:40:13.096298 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:13.108351 kubelet[2192]: I0714 23:40:13.107483 2192 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 23:40:13.108351 kubelet[2192]: E0714 23:40:13.107521 2192 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 23:40:13.114725 kubelet[2192]: E0714 23:40:13.114699 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:13.215498 kubelet[2192]: E0714 23:40:13.215021 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:13.315833 kubelet[2192]: E0714 23:40:13.315786 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:13.359717 kubelet[2192]: I0714 23:40:13.359667 2192 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:13.365942 kubelet[2192]: E0714 23:40:13.365773 2192 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:13.365942 kubelet[2192]: I0714 23:40:13.365801 2192 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:13.367614 kubelet[2192]: E0714 23:40:13.367430 2192 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:13.367614 kubelet[2192]: I0714 23:40:13.367457 2192 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:13.369167 kubelet[2192]: E0714 23:40:13.369119 2192 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:13.831019 kubelet[2192]: I0714 23:40:13.830958 2192 apiserver.go:52] "Watching apiserver" Jul 14 23:40:13.859722 kubelet[2192]: I0714 23:40:13.859615 2192 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 23:40:15.024000 systemd[1]: Reload requested from client PID 2470 ('systemctl') (unit session-7.scope)... Jul 14 23:40:15.024015 systemd[1]: Reloading... Jul 14 23:40:15.099115 zram_generator::config[2514]: No configuration found. 
Jul 14 23:40:15.180241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:40:15.264480 systemd[1]: Reloading finished in 240 ms. Jul 14 23:40:15.285457 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:15.298986 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 23:40:15.299252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:15.299310 systemd[1]: kubelet.service: Consumed 1.660s CPU time, 128M memory peak. Jul 14 23:40:15.307493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:40:15.407732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:40:15.410922 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:40:15.447762 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:40:15.447762 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 23:40:15.448059 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:40:15.448059 kubelet[2556]: I0714 23:40:15.447889 2556 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:40:15.453911 kubelet[2556]: I0714 23:40:15.453877 2556 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 23:40:15.454554 kubelet[2556]: I0714 23:40:15.454009 2556 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:40:15.454554 kubelet[2556]: I0714 23:40:15.454271 2556 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 23:40:15.455491 kubelet[2556]: I0714 23:40:15.455474 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 23:40:15.457796 kubelet[2556]: I0714 23:40:15.457664 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:40:15.460451 kubelet[2556]: E0714 23:40:15.460423 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:40:15.460451 kubelet[2556]: I0714 23:40:15.460452 2556 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:40:15.464423 kubelet[2556]: I0714 23:40:15.463319 2556 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 23:40:15.464423 kubelet[2556]: I0714 23:40:15.463553 2556 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:40:15.464423 kubelet[2556]: I0714 23:40:15.463582 2556 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:40:15.464423 kubelet[2556]: I0714 23:40:15.463865 2556 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.463875 2556 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.463925 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.464054 2556 kubelet.go:446] "Attempting to sync node with API server" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.464069 2556 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.464146 2556 kubelet.go:352] "Adding apiserver pod source" Jul 14 23:40:15.464600 kubelet[2556]: I0714 23:40:15.464196 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:40:15.465279 kubelet[2556]: I0714 23:40:15.465258 2556 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 23:40:15.466741 kubelet[2556]: I0714 23:40:15.466719 2556 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:40:15.467411 kubelet[2556]: I0714 23:40:15.467217 2556 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 23:40:15.467411 kubelet[2556]: I0714 23:40:15.467249 2556 server.go:1287] "Started kubelet" Jul 14 23:40:15.469290 kubelet[2556]: I0714 23:40:15.469248 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:40:15.469824 kubelet[2556]: I0714 
23:40:15.469630 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:40:15.469824 kubelet[2556]: I0714 23:40:15.469693 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:40:15.471744 kubelet[2556]: I0714 23:40:15.471726 2556 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:40:15.475116 kubelet[2556]: I0714 23:40:15.469278 2556 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:40:15.475116 kubelet[2556]: I0714 23:40:15.472693 2556 server.go:479] "Adding debug handlers to kubelet server" Jul 14 23:40:15.475116 kubelet[2556]: E0714 23:40:15.473213 2556 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:40:15.476480 kubelet[2556]: I0714 23:40:15.476454 2556 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 23:40:15.476797 kubelet[2556]: I0714 23:40:15.476555 2556 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 23:40:15.476797 kubelet[2556]: I0714 23:40:15.476668 2556 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:40:15.477678 kubelet[2556]: E0714 23:40:15.477275 2556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:40:15.481030 kubelet[2556]: I0714 23:40:15.479784 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 23:40:15.481030 kubelet[2556]: I0714 23:40:15.480730 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 23:40:15.481030 kubelet[2556]: I0714 23:40:15.480747 2556 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 23:40:15.481030 kubelet[2556]: I0714 23:40:15.480766 2556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 23:40:15.481030 kubelet[2556]: I0714 23:40:15.480772 2556 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 23:40:15.481030 kubelet[2556]: E0714 23:40:15.480809 2556 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:40:15.481218 kubelet[2556]: I0714 23:40:15.481198 2556 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:40:15.481330 kubelet[2556]: I0714 23:40:15.481287 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:40:15.492162 kubelet[2556]: I0714 23:40:15.491823 2556 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:40:15.521985 kubelet[2556]: I0714 23:40:15.521957 2556 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 23:40:15.521985 kubelet[2556]: I0714 23:40:15.521976 2556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 23:40:15.522160 kubelet[2556]: I0714 23:40:15.521997 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:40:15.522182 kubelet[2556]: I0714 23:40:15.522162 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 23:40:15.522200 kubelet[2556]: I0714 23:40:15.522173 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 23:40:15.522200 kubelet[2556]: I0714 23:40:15.522189 2556 policy_none.go:49] "None policy: Start" Jul 14 23:40:15.522200 kubelet[2556]: I0714 23:40:15.522197 2556 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 23:40:15.522256 kubelet[2556]: I0714 23:40:15.522205 2556 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:40:15.522314 kubelet[2556]: I0714 23:40:15.522298 2556 state_mem.go:75] "Updated machine memory state" Jul 14 23:40:15.525915 kubelet[2556]: I0714 23:40:15.525890 2556 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:40:15.526119 kubelet[2556]: I0714 23:40:15.526089 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:40:15.526168 kubelet[2556]: I0714 23:40:15.526108 2556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:40:15.526428 kubelet[2556]: I0714 23:40:15.526312 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:40:15.528010 kubelet[2556]: E0714 23:40:15.527980 2556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 23:40:15.583113 kubelet[2556]: I0714 23:40:15.581590 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:15.583113 kubelet[2556]: I0714 23:40:15.581948 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:15.583113 kubelet[2556]: I0714 23:40:15.582224 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.630602 kubelet[2556]: I0714 23:40:15.630560 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:40:15.636172 kubelet[2556]: I0714 23:40:15.636129 2556 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 23:40:15.636279 kubelet[2556]: I0714 23:40:15.636208 2556 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 23:40:15.677842 kubelet[2556]: I0714 23:40:15.677791 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.677960 kubelet[2556]: I0714 23:40:15.677830 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.677960 kubelet[2556]: I0714 23:40:15.677927 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.678004 kubelet[2556]: I0714 23:40:15.677970 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:15.678057 kubelet[2556]: I0714 23:40:15.678014 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:15.678096 kubelet[2556]: I0714 23:40:15.678054 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.678096 kubelet[2556]: I0714 23:40:15.678089 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:40:15.678144 kubelet[2556]: I0714 23:40:15.678111 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:15.678144 kubelet[2556]: I0714 23:40:15.678128 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8b42cbba3b58db2494c3fab9bdd0f41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8b42cbba3b58db2494c3fab9bdd0f41\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:15.887315 kubelet[2556]: E0714 23:40:15.887166 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:15.887315 kubelet[2556]: E0714 23:40:15.887242 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:15.887315 kubelet[2556]: E0714 23:40:15.887252 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:16.028855 sudo[2592]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 23:40:16.029151 sudo[2592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 23:40:16.460299 sudo[2592]: pam_unix(sudo:session): session closed for user root Jul 14 23:40:16.465509 kubelet[2556]: I0714 23:40:16.465487 2556 apiserver.go:52] "Watching apiserver" Jul 14 23:40:16.477209 kubelet[2556]: I0714 23:40:16.477179 2556 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 23:40:16.503709 kubelet[2556]: I0714 23:40:16.503615 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:40:16.503956 kubelet[2556]: I0714 23:40:16.503942 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:16.504672 kubelet[2556]: E0714 23:40:16.504653 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:16.511632 kubelet[2556]: E0714 23:40:16.511412 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 23:40:16.511632 kubelet[2556]: E0714 23:40:16.511564 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:16.512250 kubelet[2556]: E0714 23:40:16.512214 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 
23:40:16.512351 kubelet[2556]: E0714 23:40:16.512334 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:16.529894 kubelet[2556]: I0714 23:40:16.529838 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5298238199999998 podStartE2EDuration="1.52982382s" podCreationTimestamp="2025-07-14 23:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:16.52257532 +0000 UTC m=+1.108453042" watchObservedRunningTime="2025-07-14 23:40:16.52982382 +0000 UTC m=+1.115701542" Jul 14 23:40:16.537640 kubelet[2556]: I0714 23:40:16.536730 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.536716642 podStartE2EDuration="1.536716642s" podCreationTimestamp="2025-07-14 23:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:16.530236659 +0000 UTC m=+1.116114381" watchObservedRunningTime="2025-07-14 23:40:16.536716642 +0000 UTC m=+1.122594364" Jul 14 23:40:16.545212 kubelet[2556]: I0714 23:40:16.545173 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.545049219 podStartE2EDuration="1.545049219s" podCreationTimestamp="2025-07-14 23:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:16.537024201 +0000 UTC m=+1.122901923" watchObservedRunningTime="2025-07-14 23:40:16.545049219 +0000 UTC m=+1.130926941" Jul 14 23:40:17.506404 kubelet[2556]: E0714 23:40:17.506369 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:17.507188 kubelet[2556]: E0714 23:40:17.506945 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:17.890173 sudo[1640]: pam_unix(sudo:session): session closed for user root Jul 14 23:40:17.891730 sshd[1639]: Connection closed by 10.0.0.1 port 39958 Jul 14 23:40:17.893257 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:17.896861 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:39958.service: Deactivated successfully. Jul 14 23:40:17.898945 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 23:40:17.899214 systemd[1]: session-7.scope: Consumed 8.433s CPU time, 258.3M memory peak. Jul 14 23:40:17.900955 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jul 14 23:40:17.902016 systemd-logind[1441]: Removed session 7. 
Jul 14 23:40:18.321670 kubelet[2556]: E0714 23:40:18.321644 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:22.283993 kubelet[2556]: I0714 23:40:22.283961 2556 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 23:40:22.284548 containerd[1462]: time="2025-07-14T23:40:22.284295353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 23:40:22.285089 kubelet[2556]: I0714 23:40:22.284783 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 23:40:23.129646 kubelet[2556]: E0714 23:40:23.129568 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.287964 systemd[1]: Created slice kubepods-besteffort-podd96f351a_e7b6_4fa8_9476_89d0d4152267.slice - libcontainer container kubepods-besteffort-podd96f351a_e7b6_4fa8_9476_89d0d4152267.slice. Jul 14 23:40:23.297574 systemd[1]: Created slice kubepods-burstable-podc1391798_becd_4448_8c96_43c288f8f16a.slice - libcontainer container kubepods-burstable-podc1391798_becd_4448_8c96_43c288f8f16a.slice. Jul 14 23:40:23.329416 kubelet[2556]: I0714 23:40:23.329378 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d96f351a-e7b6-4fa8-9476-89d0d4152267-xtables-lock\") pod \"kube-proxy-gqn8f\" (UID: \"d96f351a-e7b6-4fa8-9476-89d0d4152267\") " pod="kube-system/kube-proxy-gqn8f" Jul 14 23:40:23.329885 kubelet[2556]: I0714 23:40:23.329849 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-run\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.329986 kubelet[2556]: I0714 23:40:23.329973 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-cgroup\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330118 kubelet[2556]: I0714 23:40:23.330067 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-lib-modules\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330118 kubelet[2556]: I0714 23:40:23.330116 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-kernel\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330181 kubelet[2556]: I0714 23:40:23.330134 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d96f351a-e7b6-4fa8-9476-89d0d4152267-lib-modules\") pod 
\"kube-proxy-gqn8f\" (UID: \"d96f351a-e7b6-4fa8-9476-89d0d4152267\") " pod="kube-system/kube-proxy-gqn8f" Jul 14 23:40:23.330181 kubelet[2556]: I0714 23:40:23.330151 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-net\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330181 kubelet[2556]: I0714 23:40:23.330166 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-bpf-maps\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330181 kubelet[2556]: I0714 23:40:23.330183 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1391798-becd-4448-8c96-43c288f8f16a-clustermesh-secrets\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330303 kubelet[2556]: I0714 23:40:23.330198 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-hubble-tls\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330303 kubelet[2556]: I0714 23:40:23.330213 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cni-path\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330303 kubelet[2556]: I0714 23:40:23.330227 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-xtables-lock\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330303 kubelet[2556]: I0714 23:40:23.330240 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1391798-becd-4448-8c96-43c288f8f16a-cilium-config-path\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330303 kubelet[2556]: I0714 23:40:23.330256 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dpsn\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-kube-api-access-6dpsn\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330413 kubelet[2556]: I0714 23:40:23.330272 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgt6\" (UniqueName: \"kubernetes.io/projected/d96f351a-e7b6-4fa8-9476-89d0d4152267-kube-api-access-wfgt6\") pod \"kube-proxy-gqn8f\" (UID: \"d96f351a-e7b6-4fa8-9476-89d0d4152267\") " pod="kube-system/kube-proxy-gqn8f" Jul 14 23:40:23.330413 kubelet[2556]: I0714 
23:40:23.330289 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-hostproc\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.330413 kubelet[2556]: I0714 23:40:23.330315 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d96f351a-e7b6-4fa8-9476-89d0d4152267-kube-proxy\") pod \"kube-proxy-gqn8f\" (UID: \"d96f351a-e7b6-4fa8-9476-89d0d4152267\") " pod="kube-system/kube-proxy-gqn8f" Jul 14 23:40:23.330413 kubelet[2556]: I0714 23:40:23.330329 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-etc-cni-netd\") pod \"cilium-tqxjs\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " pod="kube-system/cilium-tqxjs" Jul 14 23:40:23.371120 systemd[1]: Created slice kubepods-besteffort-pod04f21409_337a_40dc_9947_a9187a0a59cb.slice - libcontainer container kubepods-besteffort-pod04f21409_337a_40dc_9947_a9187a0a59cb.slice. Jul 14 23:40:23.431309 kubelet[2556]: I0714 23:40:23.431194 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dbb\" (UniqueName: \"kubernetes.io/projected/04f21409-337a-40dc-9947-a9187a0a59cb-kube-api-access-76dbb\") pod \"cilium-operator-6c4d7847fc-t5lkn\" (UID: \"04f21409-337a-40dc-9947-a9187a0a59cb\") " pod="kube-system/cilium-operator-6c4d7847fc-t5lkn" Jul 14 23:40:23.431408 kubelet[2556]: I0714 23:40:23.431313 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21409-337a-40dc-9947-a9187a0a59cb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-t5lkn\" (UID: \"04f21409-337a-40dc-9947-a9187a0a59cb\") " pod="kube-system/cilium-operator-6c4d7847fc-t5lkn" Jul 14 23:40:23.514942 kubelet[2556]: E0714 23:40:23.514897 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.595118 kubelet[2556]: E0714 23:40:23.595043 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.596025 containerd[1462]: time="2025-07-14T23:40:23.595622340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqn8f,Uid:d96f351a-e7b6-4fa8-9476-89d0d4152267,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:23.600267 kubelet[2556]: E0714 23:40:23.600171 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.600926 containerd[1462]: time="2025-07-14T23:40:23.600899890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqxjs,Uid:c1391798-becd-4448-8c96-43c288f8f16a,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:23.621234 containerd[1462]: time="2025-07-14T23:40:23.621149056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:23.621234 containerd[1462]: time="2025-07-14T23:40:23.621199696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:23.621234 containerd[1462]: time="2025-07-14T23:40:23.621210736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.622031 containerd[1462]: time="2025-07-14T23:40:23.621748095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:23.622031 containerd[1462]: time="2025-07-14T23:40:23.621794215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:23.622031 containerd[1462]: time="2025-07-14T23:40:23.621808815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.622031 containerd[1462]: time="2025-07-14T23:40:23.621623055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.622538 containerd[1462]: time="2025-07-14T23:40:23.622456933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.647261 systemd[1]: Started cri-containerd-038fdbd0d4f563191629f7a257949e6aacb896127ba1bfefe259e34d33be04c5.scope - libcontainer container 038fdbd0d4f563191629f7a257949e6aacb896127ba1bfefe259e34d33be04c5. Jul 14 23:40:23.648381 systemd[1]: Started cri-containerd-ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e.scope - libcontainer container ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e. 
Jul 14 23:40:23.672143 containerd[1462]: time="2025-07-14T23:40:23.672106168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqxjs,Uid:c1391798-becd-4448-8c96-43c288f8f16a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\"" Jul 14 23:40:23.673093 kubelet[2556]: E0714 23:40:23.673058 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.674492 containerd[1462]: time="2025-07-14T23:40:23.674456964Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 23:40:23.675976 kubelet[2556]: E0714 23:40:23.675875 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.676601 containerd[1462]: time="2025-07-14T23:40:23.676483001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t5lkn,Uid:04f21409-337a-40dc-9947-a9187a0a59cb,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:23.676944 containerd[1462]: time="2025-07-14T23:40:23.676900240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqn8f,Uid:d96f351a-e7b6-4fa8-9476-89d0d4152267,Namespace:kube-system,Attempt:0,} returns sandbox id \"038fdbd0d4f563191629f7a257949e6aacb896127ba1bfefe259e34d33be04c5\"" Jul 14 23:40:23.678399 kubelet[2556]: E0714 23:40:23.678242 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.681214 containerd[1462]: time="2025-07-14T23:40:23.681144153Z" level=info msg="CreateContainer within sandbox \"038fdbd0d4f563191629f7a257949e6aacb896127ba1bfefe259e34d33be04c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 23:40:23.702068 containerd[1462]: time="2025-07-14T23:40:23.701913877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:23.702068 containerd[1462]: time="2025-07-14T23:40:23.701976037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:23.702068 containerd[1462]: time="2025-07-14T23:40:23.701990677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.702284 containerd[1462]: time="2025-07-14T23:40:23.702071157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:23.711682 containerd[1462]: time="2025-07-14T23:40:23.711596420Z" level=info msg="CreateContainer within sandbox \"038fdbd0d4f563191629f7a257949e6aacb896127ba1bfefe259e34d33be04c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d026aa3a467d02df097aea8f67eaea179259045de78372ac1bb6310a9b28d39e\"" Jul 14 23:40:23.713851 containerd[1462]: time="2025-07-14T23:40:23.713226658Z" level=info msg="StartContainer for \"d026aa3a467d02df097aea8f67eaea179259045de78372ac1bb6310a9b28d39e\"" Jul 14 23:40:23.718667 systemd[1]: Started cri-containerd-25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d.scope - libcontainer container 25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d. Jul 14 23:40:23.749300 systemd[1]: Started cri-containerd-d026aa3a467d02df097aea8f67eaea179259045de78372ac1bb6310a9b28d39e.scope - libcontainer container d026aa3a467d02df097aea8f67eaea179259045de78372ac1bb6310a9b28d39e. Jul 14 23:40:23.757703 containerd[1462]: time="2025-07-14T23:40:23.757665301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t5lkn,Uid:04f21409-337a-40dc-9947-a9187a0a59cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\"" Jul 14 23:40:23.758506 kubelet[2556]: E0714 23:40:23.758476 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:23.780961 containerd[1462]: time="2025-07-14T23:40:23.780919981Z" level=info msg="StartContainer for \"d026aa3a467d02df097aea8f67eaea179259045de78372ac1bb6310a9b28d39e\" returns successfully" Jul 14 23:40:24.351595 kubelet[2556]: E0714 23:40:24.348972 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:24.519520 kubelet[2556]: E0714 23:40:24.519480 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:24.519623 kubelet[2556]: E0714 23:40:24.519593 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:24.536563 kubelet[2556]: I0714 23:40:24.536483 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gqn8f" podStartSLOduration=1.536465702 podStartE2EDuration="1.536465702s" podCreationTimestamp="2025-07-14 23:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:24.536236862 +0000 UTC m=+9.122114584" watchObservedRunningTime="2025-07-14 23:40:24.536465702 +0000 UTC m=+9.122343424" Jul 14 23:40:28.329267 kubelet[2556]: E0714 23:40:28.329188 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:28.680829 update_engine[1445]: I20250714 23:40:28.680606 1445 update_attempter.cc:509] Updating boot flags... 
Jul 14 23:40:28.721149 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2939) Jul 14 23:40:28.773188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2942) Jul 14 23:40:30.196383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119342143.mount: Deactivated successfully. Jul 14 23:40:33.005589 containerd[1462]: time="2025-07-14T23:40:33.005526175Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:33.006605 containerd[1462]: time="2025-07-14T23:40:33.006017014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 14 23:40:33.007190 containerd[1462]: time="2025-07-14T23:40:33.007141733Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:33.015345 containerd[1462]: time="2025-07-14T23:40:33.015201486Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.340706122s" Jul 14 23:40:33.015345 containerd[1462]: time="2025-07-14T23:40:33.015243126Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 23:40:33.018199 containerd[1462]: time="2025-07-14T23:40:33.017696764Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 23:40:33.020992 containerd[1462]: time="2025-07-14T23:40:33.020963801Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 23:40:33.040911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458149470.mount: Deactivated successfully. Jul 14 23:40:33.041479 containerd[1462]: time="2025-07-14T23:40:33.041346262Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\"" Jul 14 23:40:33.042033 containerd[1462]: time="2025-07-14T23:40:33.041980742Z" level=info msg="StartContainer for \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\"" Jul 14 23:40:33.068242 systemd[1]: Started cri-containerd-00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d.scope - libcontainer container 00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d. 
Jul 14 23:40:33.093170 containerd[1462]: time="2025-07-14T23:40:33.093122056Z" level=info msg="StartContainer for \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\" returns successfully" Jul 14 23:40:33.139832 systemd[1]: cri-containerd-00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d.scope: Deactivated successfully. Jul 14 23:40:33.140190 systemd[1]: cri-containerd-00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d.scope: Consumed 59ms CPU time, 6.7M memory peak, 3.1M written to disk. Jul 14 23:40:33.311733 containerd[1462]: time="2025-07-14T23:40:33.297594632Z" level=info msg="shim disconnected" id=00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d namespace=k8s.io Jul 14 23:40:33.311733 containerd[1462]: time="2025-07-14T23:40:33.311655979Z" level=warning msg="cleaning up after shim disconnected" id=00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d namespace=k8s.io Jul 14 23:40:33.311733 containerd[1462]: time="2025-07-14T23:40:33.311671019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:40:33.536155 kubelet[2556]: E0714 23:40:33.536116 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:33.538244 containerd[1462]: time="2025-07-14T23:40:33.538182055Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 23:40:33.553509 containerd[1462]: time="2025-07-14T23:40:33.553466081Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\"" Jul 14 23:40:33.553932 containerd[1462]: time="2025-07-14T23:40:33.553904081Z" level=info msg="StartContainer for \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\"" Jul 14 23:40:33.581278 systemd[1]: Started cri-containerd-3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71.scope - libcontainer container 3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71. Jul 14 23:40:33.603691 containerd[1462]: time="2025-07-14T23:40:33.603644356Z" level=info msg="StartContainer for \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\" returns successfully" Jul 14 23:40:33.629169 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 23:40:33.629392 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:40:33.629633 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:40:33.637498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:40:33.637679 systemd[1]: cri-containerd-3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71.scope: Deactivated successfully. Jul 14 23:40:33.651128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 14 23:40:33.668005 containerd[1462]: time="2025-07-14T23:40:33.667938818Z" level=info msg="shim disconnected" id=3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71 namespace=k8s.io Jul 14 23:40:33.668005 containerd[1462]: time="2025-07-14T23:40:33.667992378Z" level=warning msg="cleaning up after shim disconnected" id=3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71 namespace=k8s.io Jul 14 23:40:33.668005 containerd[1462]: time="2025-07-14T23:40:33.668003018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:40:34.038434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d-rootfs.mount: Deactivated successfully. Jul 14 23:40:34.275488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631962557.mount: Deactivated successfully. Jul 14 23:40:34.539160 kubelet[2556]: E0714 23:40:34.538778 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:34.542390 containerd[1462]: time="2025-07-14T23:40:34.542209421Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 23:40:34.573975 containerd[1462]: time="2025-07-14T23:40:34.573919475Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\"" Jul 14 23:40:34.574629 containerd[1462]: time="2025-07-14T23:40:34.574580194Z" level=info msg="StartContainer for \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\"" Jul 14 23:40:34.604432 systemd[1]: Started cri-containerd-37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e.scope - libcontainer container 37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e. Jul 14 23:40:34.653489 containerd[1462]: time="2025-07-14T23:40:34.653446288Z" level=info msg="StartContainer for \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\" returns successfully" Jul 14 23:40:34.664555 systemd[1]: cri-containerd-37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e.scope: Deactivated successfully. 
Jul 14 23:40:34.753856 containerd[1462]: time="2025-07-14T23:40:34.753765803Z" level=info msg="shim disconnected" id=37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e namespace=k8s.io Jul 14 23:40:34.753856 containerd[1462]: time="2025-07-14T23:40:34.753840523Z" level=warning msg="cleaning up after shim disconnected" id=37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e namespace=k8s.io Jul 14 23:40:34.753856 containerd[1462]: time="2025-07-14T23:40:34.753850443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:40:34.775317 containerd[1462]: time="2025-07-14T23:40:34.775269945Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:34.775814 containerd[1462]: time="2025-07-14T23:40:34.775767864Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 14 23:40:34.776649 containerd[1462]: time="2025-07-14T23:40:34.776608184Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:40:34.778176 containerd[1462]: time="2025-07-14T23:40:34.778144902Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.760409778s" Jul 14 23:40:34.778263 containerd[1462]: time="2025-07-14T23:40:34.778179262Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 23:40:34.780218 containerd[1462]: time="2025-07-14T23:40:34.780183861Z" level=info msg="CreateContainer within sandbox \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 23:40:34.792968 containerd[1462]: time="2025-07-14T23:40:34.792808970Z" level=info msg="CreateContainer within sandbox \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\"" Jul 14 23:40:34.793388 containerd[1462]: time="2025-07-14T23:40:34.793365649Z" level=info msg="StartContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\"" Jul 14 23:40:34.818264 systemd[1]: Started cri-containerd-e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57.scope - libcontainer container e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57. 
Jul 14 23:40:34.844789 containerd[1462]: time="2025-07-14T23:40:34.844738486Z" level=info msg="StartContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" returns successfully" Jul 14 23:40:35.541273 kubelet[2556]: E0714 23:40:35.541236 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:35.548993 kubelet[2556]: E0714 23:40:35.548962 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:35.551857 containerd[1462]: time="2025-07-14T23:40:35.551812678Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 23:40:35.568588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315551762.mount: Deactivated successfully. Jul 14 23:40:35.572014 containerd[1462]: time="2025-07-14T23:40:35.571969382Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\"" Jul 14 23:40:35.572718 containerd[1462]: time="2025-07-14T23:40:35.572691222Z" level=info msg="StartContainer for \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\"" Jul 14 23:40:35.595742 kubelet[2556]: I0714 23:40:35.595680 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-t5lkn" podStartSLOduration=1.575675441 podStartE2EDuration="12.595662204s" podCreationTimestamp="2025-07-14 23:40:23 +0000 UTC" firstStartedPulling="2025-07-14 23:40:23.758954339 +0000 UTC m=+8.344832061" lastFinishedPulling="2025-07-14 23:40:34.778941102 +0000 UTC m=+19.364818824" observedRunningTime="2025-07-14 23:40:35.554122716 +0000 UTC m=+20.140000438" watchObservedRunningTime="2025-07-14 23:40:35.595662204 +0000 UTC m=+20.181539926" Jul 14 23:40:35.625234 systemd[1]: Started cri-containerd-4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8.scope - libcontainer container 4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8. Jul 14 23:40:35.644459 systemd[1]: cri-containerd-4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8.scope: Deactivated successfully. Jul 14 23:40:35.646764 containerd[1462]: time="2025-07-14T23:40:35.646691643Z" level=info msg="StartContainer for \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\" returns successfully" Jul 14 23:40:35.670672 containerd[1462]: time="2025-07-14T23:40:35.670608864Z" level=info msg="shim disconnected" id=4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8 namespace=k8s.io Jul 14 23:40:35.671049 containerd[1462]: time="2025-07-14T23:40:35.670895224Z" level=warning msg="cleaning up after shim disconnected" id=4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8 namespace=k8s.io Jul 14 23:40:35.671049 containerd[1462]: time="2025-07-14T23:40:35.670914704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:40:36.039904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8-rootfs.mount: Deactivated successfully. 
Jul 14 23:40:36.553066 kubelet[2556]: E0714 23:40:36.553027 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:36.553430 kubelet[2556]: E0714 23:40:36.553318 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:36.555046 containerd[1462]: time="2025-07-14T23:40:36.555011832Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 23:40:36.583969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067813304.mount: Deactivated successfully. Jul 14 23:40:36.586333 containerd[1462]: time="2025-07-14T23:40:36.586290089Z" level=info msg="CreateContainer within sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\"" Jul 14 23:40:36.586799 containerd[1462]: time="2025-07-14T23:40:36.586756448Z" level=info msg="StartContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\"" Jul 14 23:40:36.614264 systemd[1]: Started cri-containerd-f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c.scope - libcontainer container f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c. Jul 14 23:40:36.640554 containerd[1462]: time="2025-07-14T23:40:36.640402489Z" level=info msg="StartContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" returns successfully" Jul 14 23:40:36.761530 kubelet[2556]: I0714 23:40:36.761494 2556 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 23:40:36.816397 systemd[1]: Created slice kubepods-burstable-pod0e173942_0cd2_4c68_84fe_1264bbfe9eb1.slice - libcontainer container kubepods-burstable-pod0e173942_0cd2_4c68_84fe_1264bbfe9eb1.slice. Jul 14 23:40:36.825053 systemd[1]: Created slice kubepods-burstable-pod3c0b2da2_23be_47e6_a35a_ff4d94da78ed.slice - libcontainer container kubepods-burstable-pod3c0b2da2_23be_47e6_a35a_ff4d94da78ed.slice. 
Jul 14 23:40:36.827064 kubelet[2556]: I0714 23:40:36.827026 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c0b2da2-23be-47e6-a35a-ff4d94da78ed-config-volume\") pod \"coredns-668d6bf9bc-c77cv\" (UID: \"3c0b2da2-23be-47e6-a35a-ff4d94da78ed\") " pod="kube-system/coredns-668d6bf9bc-c77cv" Jul 14 23:40:36.827064 kubelet[2556]: I0714 23:40:36.827066 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e173942-0cd2-4c68-84fe-1264bbfe9eb1-config-volume\") pod \"coredns-668d6bf9bc-4bspd\" (UID: \"0e173942-0cd2-4c68-84fe-1264bbfe9eb1\") " pod="kube-system/coredns-668d6bf9bc-4bspd" Jul 14 23:40:36.827170 kubelet[2556]: I0714 23:40:36.827130 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgc69\" (UniqueName: \"kubernetes.io/projected/3c0b2da2-23be-47e6-a35a-ff4d94da78ed-kube-api-access-wgc69\") pod \"coredns-668d6bf9bc-c77cv\" (UID: \"3c0b2da2-23be-47e6-a35a-ff4d94da78ed\") " pod="kube-system/coredns-668d6bf9bc-c77cv" Jul 14 23:40:36.827170 kubelet[2556]: I0714 23:40:36.827154 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtb9k\" (UniqueName: \"kubernetes.io/projected/0e173942-0cd2-4c68-84fe-1264bbfe9eb1-kube-api-access-mtb9k\") pod \"coredns-668d6bf9bc-4bspd\" (UID: \"0e173942-0cd2-4c68-84fe-1264bbfe9eb1\") " pod="kube-system/coredns-668d6bf9bc-4bspd" Jul 14 23:40:37.120516 kubelet[2556]: E0714 23:40:37.120437 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:37.121908 containerd[1462]: time="2025-07-14T23:40:37.121409337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bspd,Uid:0e173942-0cd2-4c68-84fe-1264bbfe9eb1,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:37.128563 kubelet[2556]: E0714 23:40:37.128535 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:37.129368 containerd[1462]: time="2025-07-14T23:40:37.129145612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c77cv,Uid:3c0b2da2-23be-47e6-a35a-ff4d94da78ed,Namespace:kube-system,Attempt:0,}" Jul 14 23:40:37.558244 kubelet[2556]: E0714 23:40:37.558206 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:38.559950 kubelet[2556]: E0714 23:40:38.559913 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:38.872264 systemd-networkd[1391]: cilium_host: Link UP Jul 14 23:40:38.872822 systemd-networkd[1391]: cilium_net: Link UP Jul 14 23:40:38.873285 systemd-networkd[1391]: cilium_net: Gained carrier Jul 14 23:40:38.873508 systemd-networkd[1391]: cilium_host: Gained carrier Jul 14 23:40:38.873615 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jul 14 23:40:38.873730 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jul 14 23:40:38.951049 systemd-networkd[1391]: cilium_vxlan: Link UP Jul 14 
23:40:38.951056 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jul 14 23:40:39.261187 kernel: NET: Registered PF_ALG protocol family Jul 14 23:40:39.561598 kubelet[2556]: E0714 23:40:39.561505 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:39.812803 systemd-networkd[1391]: lxc_health: Link UP Jul 14 23:40:39.813809 systemd-networkd[1391]: lxc_health: Gained carrier Jul 14 23:40:40.285246 kernel: eth0: renamed from tmp15117 Jul 14 23:40:40.302631 systemd-networkd[1391]: lxcfb6f9750902f: Link UP Jul 14 23:40:40.302866 systemd-networkd[1391]: lxcfb6f9750902f: Gained carrier Jul 14 23:40:40.302974 systemd-networkd[1391]: lxcab3d8cf2d67f: Link UP Jul 14 23:40:40.316117 kernel: eth0: renamed from tmp72776 Jul 14 23:40:40.323676 systemd-networkd[1391]: lxcab3d8cf2d67f: Gained carrier Jul 14 23:40:40.498604 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jul 14 23:40:41.378035 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:34894.service - OpenSSH per-connection server daemon (10.0.0.1:34894). Jul 14 23:40:41.394531 systemd-networkd[1391]: lxcab3d8cf2d67f: Gained IPv6LL Jul 14 23:40:41.427529 sshd[3789]: Accepted publickey for core from 10.0.0.1 port 34894 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:41.428691 sshd-session[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:41.436186 systemd-logind[1441]: New session 8 of user core. Jul 14 23:40:41.445204 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 23:40:41.579040 sshd[3793]: Connection closed by 10.0.0.1 port 34894 Jul 14 23:40:41.579368 sshd-session[3789]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:41.583416 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jul 14 23:40:41.583616 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:34894.service: Deactivated successfully. Jul 14 23:40:41.585153 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 23:40:41.585889 systemd-logind[1441]: Removed session 8. Jul 14 23:40:41.611119 kubelet[2556]: E0714 23:40:41.610208 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:41.627445 kubelet[2556]: I0714 23:40:41.627380 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tqxjs" podStartSLOduration=9.283895674 podStartE2EDuration="18.627365153s" podCreationTimestamp="2025-07-14 23:40:23 +0000 UTC" firstStartedPulling="2025-07-14 23:40:23.674029325 +0000 UTC m=+8.259907047" lastFinishedPulling="2025-07-14 23:40:33.017498724 +0000 UTC m=+17.603376526" observedRunningTime="2025-07-14 23:40:37.575852661 +0000 UTC m=+22.161730383" watchObservedRunningTime="2025-07-14 23:40:41.627365153 +0000 UTC m=+26.213242875" Jul 14 23:40:41.842435 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jul 14 23:40:41.970514 systemd-networkd[1391]: lxcfb6f9750902f: Gained IPv6LL Jul 14 23:40:43.743626 containerd[1462]: time="2025-07-14T23:40:43.743439378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:43.743626 containerd[1462]: time="2025-07-14T23:40:43.743490858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:43.743626 containerd[1462]: time="2025-07-14T23:40:43.743502018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:43.743626 containerd[1462]: time="2025-07-14T23:40:43.743585698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:43.752522 containerd[1462]: time="2025-07-14T23:40:43.752253534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:40:43.752522 containerd[1462]: time="2025-07-14T23:40:43.752322813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:40:43.752522 containerd[1462]: time="2025-07-14T23:40:43.752339693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:43.752522 containerd[1462]: time="2025-07-14T23:40:43.752452133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:40:43.770252 systemd[1]: Started cri-containerd-72776797baa0fd86583cba25a53deba30815b84967f2dddd6b8418b92fee1a1f.scope - libcontainer container 72776797baa0fd86583cba25a53deba30815b84967f2dddd6b8418b92fee1a1f. Jul 14 23:40:43.773179 systemd[1]: Started cri-containerd-1511744abacf6af38e3a21a4b7ea660193e50199f75b71dd6197445b4b7250f2.scope - libcontainer container 1511744abacf6af38e3a21a4b7ea660193e50199f75b71dd6197445b4b7250f2. Jul 14 23:40:43.787506 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:40:43.788445 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:40:43.808104 containerd[1462]: time="2025-07-14T23:40:43.808028467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bspd,Uid:0e173942-0cd2-4c68-84fe-1264bbfe9eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1511744abacf6af38e3a21a4b7ea660193e50199f75b71dd6197445b4b7250f2\"" Jul 14 23:40:43.808691 kubelet[2556]: E0714 23:40:43.808670 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:43.812137 containerd[1462]: time="2025-07-14T23:40:43.811904905Z" level=info msg="CreateContainer within sandbox \"1511744abacf6af38e3a21a4b7ea660193e50199f75b71dd6197445b4b7250f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 23:40:43.814066 containerd[1462]: time="2025-07-14T23:40:43.813897544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c77cv,Uid:3c0b2da2-23be-47e6-a35a-ff4d94da78ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"72776797baa0fd86583cba25a53deba30815b84967f2dddd6b8418b92fee1a1f\"" Jul 14 23:40:43.814824 kubelet[2556]: E0714 23:40:43.814802 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:43.816633 containerd[1462]: time="2025-07-14T23:40:43.816495943Z" level=info msg="CreateContainer within sandbox 
\"72776797baa0fd86583cba25a53deba30815b84967f2dddd6b8418b92fee1a1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 23:40:43.830261 containerd[1462]: time="2025-07-14T23:40:43.830218817Z" level=info msg="CreateContainer within sandbox \"1511744abacf6af38e3a21a4b7ea660193e50199f75b71dd6197445b4b7250f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"485206eb36a288618caaf11e4add3abbcb55be53daec90f934eeeb0c77736e17\"" Jul 14 23:40:43.830880 containerd[1462]: time="2025-07-14T23:40:43.830857576Z" level=info msg="StartContainer for \"485206eb36a288618caaf11e4add3abbcb55be53daec90f934eeeb0c77736e17\"" Jul 14 23:40:43.831754 containerd[1462]: time="2025-07-14T23:40:43.831652376Z" level=info msg="CreateContainer within sandbox \"72776797baa0fd86583cba25a53deba30815b84967f2dddd6b8418b92fee1a1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd8eb7c4544dbb125d15b693e8ca51b3a8be9d1fd6c9d272f29565fc94a28854\"" Jul 14 23:40:43.832489 containerd[1462]: time="2025-07-14T23:40:43.832446696Z" level=info msg="StartContainer for \"fd8eb7c4544dbb125d15b693e8ca51b3a8be9d1fd6c9d272f29565fc94a28854\"" Jul 14 23:40:43.861283 systemd[1]: Started cri-containerd-485206eb36a288618caaf11e4add3abbcb55be53daec90f934eeeb0c77736e17.scope - libcontainer container 485206eb36a288618caaf11e4add3abbcb55be53daec90f934eeeb0c77736e17. Jul 14 23:40:43.864067 systemd[1]: Started cri-containerd-fd8eb7c4544dbb125d15b693e8ca51b3a8be9d1fd6c9d272f29565fc94a28854.scope - libcontainer container fd8eb7c4544dbb125d15b693e8ca51b3a8be9d1fd6c9d272f29565fc94a28854. Jul 14 23:40:43.903903 containerd[1462]: time="2025-07-14T23:40:43.903860622Z" level=info msg="StartContainer for \"485206eb36a288618caaf11e4add3abbcb55be53daec90f934eeeb0c77736e17\" returns successfully" Jul 14 23:40:43.904263 containerd[1462]: time="2025-07-14T23:40:43.904011662Z" level=info msg="StartContainer for \"fd8eb7c4544dbb125d15b693e8ca51b3a8be9d1fd6c9d272f29565fc94a28854\" returns successfully" Jul 14 23:40:44.573284 kubelet[2556]: E0714 23:40:44.572923 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:44.577803 kubelet[2556]: E0714 23:40:44.576865 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:44.596977 kubelet[2556]: I0714 23:40:44.596904 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c77cv" podStartSLOduration=21.596889232 podStartE2EDuration="21.596889232s" podCreationTimestamp="2025-07-14 23:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:44.585373517 +0000 UTC m=+29.171251279" watchObservedRunningTime="2025-07-14 23:40:44.596889232 +0000 UTC m=+29.182766914" Jul 14 23:40:44.611644 kubelet[2556]: I0714 23:40:44.611580 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4bspd" podStartSLOduration=21.611553466 podStartE2EDuration="21.611553466s" podCreationTimestamp="2025-07-14 23:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:40:44.597976552 +0000 UTC m=+29.183854314" 
watchObservedRunningTime="2025-07-14 23:40:44.611553466 +0000 UTC m=+29.197431188" Jul 14 23:40:44.747948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376807402.mount: Deactivated successfully. Jul 14 23:40:45.578012 kubelet[2556]: E0714 23:40:45.577966 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:45.578352 kubelet[2556]: E0714 23:40:45.578032 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:46.579897 kubelet[2556]: E0714 23:40:46.579855 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:46.581177 kubelet[2556]: E0714 23:40:46.581141 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:46.598442 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:34804.service - OpenSSH per-connection server daemon (10.0.0.1:34804). Jul 14 23:40:46.644471 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 34804 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:46.646137 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:46.650622 systemd-logind[1441]: New session 9 of user core. Jul 14 23:40:46.662247 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 23:40:46.785930 sshd[3985]: Connection closed by 10.0.0.1 port 34804 Jul 14 23:40:46.786482 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:46.789598 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:34804.service: Deactivated successfully. Jul 14 23:40:46.792610 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 23:40:46.793503 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jul 14 23:40:46.794354 systemd-logind[1441]: Removed session 9. Jul 14 23:40:48.691557 kubelet[2556]: I0714 23:40:48.691499 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 23:40:48.692480 kubelet[2556]: E0714 23:40:48.692404 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:49.593130 kubelet[2556]: E0714 23:40:49.593096 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:40:51.801597 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:34820.service - OpenSSH per-connection server daemon (10.0.0.1:34820). Jul 14 23:40:51.839682 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 34820 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:51.840835 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:51.844390 systemd-logind[1441]: New session 10 of user core. Jul 14 23:40:51.854238 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 14 23:40:51.962336 sshd[4001]: Connection closed by 10.0.0.1 port 34820 Jul 14 23:40:51.962692 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:51.965813 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:34820.service: Deactivated successfully. Jul 14 23:40:51.967517 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 23:40:51.970264 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jul 14 23:40:51.971181 systemd-logind[1441]: Removed session 10. Jul 14 23:40:56.976423 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:34768.service - OpenSSH per-connection server daemon (10.0.0.1:34768). Jul 14 23:40:57.017133 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 34768 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:57.018287 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:57.022168 systemd-logind[1441]: New session 11 of user core. Jul 14 23:40:57.029214 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 23:40:57.149628 sshd[4020]: Connection closed by 10.0.0.1 port 34768 Jul 14 23:40:57.149073 sshd-session[4018]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:57.156317 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:34768.service: Deactivated successfully. Jul 14 23:40:57.157973 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 23:40:57.159846 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jul 14 23:40:57.168444 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:34776.service - OpenSSH per-connection server daemon (10.0.0.1:34776). Jul 14 23:40:57.169427 systemd-logind[1441]: Removed session 11. Jul 14 23:40:57.206927 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 34776 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:57.208260 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:57.212383 systemd-logind[1441]: New session 12 of user core. Jul 14 23:40:57.227248 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 23:40:57.382251 sshd[4036]: Connection closed by 10.0.0.1 port 34776 Jul 14 23:40:57.382609 sshd-session[4033]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:57.392630 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:34776.service: Deactivated successfully. Jul 14 23:40:57.395828 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 23:40:57.399229 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Jul 14 23:40:57.419466 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784). Jul 14 23:40:57.420776 systemd-logind[1441]: Removed session 12. Jul 14 23:40:57.460933 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:40:57.462241 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:40:57.466597 systemd-logind[1441]: New session 13 of user core. Jul 14 23:40:57.473249 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 23:40:57.586824 sshd[4050]: Connection closed by 10.0.0.1 port 34784 Jul 14 23:40:57.587314 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jul 14 23:40:57.590510 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:34784.service: Deactivated successfully. 
Jul 14 23:40:57.592693 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 23:40:57.593465 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jul 14 23:40:57.594197 systemd-logind[1441]: Removed session 13. Jul 14 23:41:02.598486 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:49024.service - OpenSSH per-connection server daemon (10.0.0.1:49024). Jul 14 23:41:02.637320 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 49024 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:02.638524 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:02.642614 systemd-logind[1441]: New session 14 of user core. Jul 14 23:41:02.654276 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 23:41:02.764099 sshd[4066]: Connection closed by 10.0.0.1 port 49024 Jul 14 23:41:02.764453 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:02.767758 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:49024.service: Deactivated successfully. Jul 14 23:41:02.769573 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 23:41:02.770592 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Jul 14 23:41:02.772058 systemd-logind[1441]: Removed session 14. Jul 14 23:41:07.783764 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:49030.service - OpenSSH per-connection server daemon (10.0.0.1:49030). Jul 14 23:41:07.823151 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 49030 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:07.824308 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:07.828520 systemd-logind[1441]: New session 15 of user core. Jul 14 23:41:07.840246 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 23:41:07.967214 sshd[4081]: Connection closed by 10.0.0.1 port 49030 Jul 14 23:41:07.968602 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:07.981521 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:49030.service: Deactivated successfully. Jul 14 23:41:07.983151 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 23:41:07.983805 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Jul 14 23:41:07.985666 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:49042.service - OpenSSH per-connection server daemon (10.0.0.1:49042). Jul 14 23:41:07.986522 systemd-logind[1441]: Removed session 15. Jul 14 23:41:08.031864 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 49042 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:08.032302 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:08.036918 systemd-logind[1441]: New session 16 of user core. Jul 14 23:41:08.048294 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 23:41:08.249730 sshd[4096]: Connection closed by 10.0.0.1 port 49042 Jul 14 23:41:08.249703 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:08.260587 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:49046.service - OpenSSH per-connection server daemon (10.0.0.1:49046). Jul 14 23:41:08.261619 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:49042.service: Deactivated successfully. Jul 14 23:41:08.264591 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 14 23:41:08.266116 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Jul 14 23:41:08.267100 systemd-logind[1441]: Removed session 16. Jul 14 23:41:08.317048 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 49046 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:08.317422 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:08.321479 systemd-logind[1441]: New session 17 of user core. Jul 14 23:41:08.331229 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 23:41:09.085121 sshd[4110]: Connection closed by 10.0.0.1 port 49046 Jul 14 23:41:09.085013 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:09.099780 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:49046.service: Deactivated successfully. Jul 14 23:41:09.102741 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 23:41:09.107305 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Jul 14 23:41:09.113372 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:49050.service - OpenSSH per-connection server daemon (10.0.0.1:49050). Jul 14 23:41:09.119437 systemd-logind[1441]: Removed session 17. Jul 14 23:41:09.157312 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 49050 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:09.158735 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:09.163548 systemd-logind[1441]: New session 18 of user core. Jul 14 23:41:09.182288 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 23:41:09.411592 sshd[4134]: Connection closed by 10.0.0.1 port 49050 Jul 14 23:41:09.412444 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:09.425245 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:49050.service: Deactivated successfully. Jul 14 23:41:09.427023 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 23:41:09.427945 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Jul 14 23:41:09.435914 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:49060.service - OpenSSH per-connection server daemon (10.0.0.1:49060). Jul 14 23:41:09.437448 systemd-logind[1441]: Removed session 18. Jul 14 23:41:09.472825 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 49060 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:09.474232 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:09.478610 systemd-logind[1441]: New session 19 of user core. Jul 14 23:41:09.484271 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 23:41:09.606004 sshd[4148]: Connection closed by 10.0.0.1 port 49060 Jul 14 23:41:09.606300 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:09.610020 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:49060.service: Deactivated successfully. Jul 14 23:41:09.612158 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 23:41:09.612849 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Jul 14 23:41:09.613739 systemd-logind[1441]: Removed session 19. Jul 14 23:41:14.620954 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:55976.service - OpenSSH per-connection server daemon (10.0.0.1:55976). 
Jul 14 23:41:14.660934 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 55976 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:14.662307 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:14.666163 systemd-logind[1441]: New session 20 of user core. Jul 14 23:41:14.672362 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 23:41:14.782164 sshd[4169]: Connection closed by 10.0.0.1 port 55976 Jul 14 23:41:14.782537 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:14.785300 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:55976.service: Deactivated successfully. Jul 14 23:41:14.787058 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 23:41:14.789094 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Jul 14 23:41:14.790037 systemd-logind[1441]: Removed session 20. Jul 14 23:41:19.795272 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:55988.service - OpenSSH per-connection server daemon (10.0.0.1:55988). Jul 14 23:41:19.833670 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 55988 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:19.834886 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:19.839170 systemd-logind[1441]: New session 21 of user core. Jul 14 23:41:19.847259 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 23:41:19.958066 sshd[4186]: Connection closed by 10.0.0.1 port 55988 Jul 14 23:41:19.958435 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:19.961853 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:55988.service: Deactivated successfully. Jul 14 23:41:19.964993 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 23:41:19.966043 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Jul 14 23:41:19.966982 systemd-logind[1441]: Removed session 21. Jul 14 23:41:24.973574 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:42266.service - OpenSSH per-connection server daemon (10.0.0.1:42266). Jul 14 23:41:25.014727 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 42266 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:25.015785 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:25.020572 systemd-logind[1441]: New session 22 of user core. Jul 14 23:41:25.034252 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 23:41:25.143723 sshd[4204]: Connection closed by 10.0.0.1 port 42266 Jul 14 23:41:25.144043 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:25.160604 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:42266.service: Deactivated successfully. Jul 14 23:41:25.164412 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 23:41:25.165896 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Jul 14 23:41:25.167502 systemd-logind[1441]: Removed session 22. Jul 14 23:41:25.177401 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:42270.service - OpenSSH per-connection server daemon (10.0.0.1:42270). 
Jul 14 23:41:25.222122 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 42270 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:25.223114 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:25.227544 systemd-logind[1441]: New session 23 of user core. Jul 14 23:41:25.237217 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 14 23:41:27.389487 containerd[1462]: time="2025-07-14T23:41:27.389446799Z" level=info msg="StopContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" with timeout 30 (s)" Jul 14 23:41:27.390805 containerd[1462]: time="2025-07-14T23:41:27.390775887Z" level=info msg="Stop container \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" with signal terminated" Jul 14 23:41:27.401512 systemd[1]: cri-containerd-e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57.scope: Deactivated successfully. Jul 14 23:41:27.419296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57-rootfs.mount: Deactivated successfully. Jul 14 23:41:27.421626 containerd[1462]: time="2025-07-14T23:41:27.421536901Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:41:27.426754 containerd[1462]: time="2025-07-14T23:41:27.426707331Z" level=info msg="shim disconnected" id=e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57 namespace=k8s.io Jul 14 23:41:27.426754 containerd[1462]: time="2025-07-14T23:41:27.426753451Z" level=warning msg="cleaning up after shim disconnected" id=e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57 namespace=k8s.io Jul 14 23:41:27.426857 containerd[1462]: time="2025-07-14T23:41:27.426761851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:27.427256 containerd[1462]: time="2025-07-14T23:41:27.427232174Z" level=info msg="StopContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" with timeout 2 (s)" Jul 14 23:41:27.427874 containerd[1462]: time="2025-07-14T23:41:27.427835057Z" level=info msg="Stop container \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" with signal terminated" Jul 14 23:41:27.433938 systemd-networkd[1391]: lxc_health: Link DOWN Jul 14 23:41:27.433947 systemd-networkd[1391]: lxc_health: Lost carrier Jul 14 23:41:27.447739 systemd[1]: cri-containerd-f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c.scope: Deactivated successfully. Jul 14 23:41:27.448031 systemd[1]: cri-containerd-f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c.scope: Consumed 6.331s CPU time, 122.9M memory peak, 152K read from disk, 12.9M written to disk. 
Jul 14 23:41:27.465244 containerd[1462]: time="2025-07-14T23:41:27.465188229Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:41:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 14 23:41:27.467923 containerd[1462]: time="2025-07-14T23:41:27.467894884Z" level=info msg="StopContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" returns successfully" Jul 14 23:41:27.470200 containerd[1462]: time="2025-07-14T23:41:27.470166137Z" level=info msg="StopPodSandbox for \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\"" Jul 14 23:41:27.470200 containerd[1462]: time="2025-07-14T23:41:27.470202537Z" level=info msg="Container to stop \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.470651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c-rootfs.mount: Deactivated successfully. Jul 14 23:41:27.473594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d-shm.mount: Deactivated successfully. Jul 14 23:41:27.476493 systemd[1]: cri-containerd-25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d.scope: Deactivated successfully. Jul 14 23:41:27.476874 containerd[1462]: time="2025-07-14T23:41:27.476769335Z" level=info msg="shim disconnected" id=f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c namespace=k8s.io Jul 14 23:41:27.476874 containerd[1462]: time="2025-07-14T23:41:27.476861935Z" level=warning msg="cleaning up after shim disconnected" id=f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c namespace=k8s.io Jul 14 23:41:27.476874 containerd[1462]: time="2025-07-14T23:41:27.476872495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:27.493899 containerd[1462]: time="2025-07-14T23:41:27.493859352Z" level=info msg="StopContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" returns successfully" Jul 14 23:41:27.495011 containerd[1462]: time="2025-07-14T23:41:27.494747757Z" level=info msg="StopPodSandbox for \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\"" Jul 14 23:41:27.495058 containerd[1462]: time="2025-07-14T23:41:27.495026718Z" level=info msg="Container to stop \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.495058 containerd[1462]: time="2025-07-14T23:41:27.495047038Z" level=info msg="Container to stop \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.495132 containerd[1462]: time="2025-07-14T23:41:27.495056359Z" level=info msg="Container to stop \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.495132 containerd[1462]: time="2025-07-14T23:41:27.495066279Z" level=info msg="Container to stop \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.495549 containerd[1462]: time="2025-07-14T23:41:27.495074119Z" level=info msg="Container to 
stop \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:41:27.497669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e-shm.mount: Deactivated successfully. Jul 14 23:41:27.501497 systemd[1]: cri-containerd-ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e.scope: Deactivated successfully. Jul 14 23:41:27.504160 containerd[1462]: time="2025-07-14T23:41:27.504112450Z" level=info msg="shim disconnected" id=25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d namespace=k8s.io Jul 14 23:41:27.504160 containerd[1462]: time="2025-07-14T23:41:27.504161210Z" level=warning msg="cleaning up after shim disconnected" id=25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d namespace=k8s.io Jul 14 23:41:27.504278 containerd[1462]: time="2025-07-14T23:41:27.504169770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:27.521137 containerd[1462]: time="2025-07-14T23:41:27.519590298Z" level=info msg="TearDown network for sandbox \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\" successfully" Jul 14 23:41:27.521137 containerd[1462]: time="2025-07-14T23:41:27.519620538Z" level=info msg="StopPodSandbox for \"25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d\" returns successfully" Jul 14 23:41:27.540763 containerd[1462]: time="2025-07-14T23:41:27.540463696Z" level=info msg="shim disconnected" id=ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e namespace=k8s.io Jul 14 23:41:27.540763 containerd[1462]: time="2025-07-14T23:41:27.540521337Z" level=warning msg="cleaning up after shim disconnected" id=ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e namespace=k8s.io Jul 14 23:41:27.540763 containerd[1462]: time="2025-07-14T23:41:27.540529777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:27.569794 containerd[1462]: time="2025-07-14T23:41:27.569749222Z" level=info msg="TearDown network for sandbox \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" successfully" Jul 14 23:41:27.569794 containerd[1462]: time="2025-07-14T23:41:27.569783703Z" level=info msg="StopPodSandbox for \"ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e\" returns successfully" Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641433 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-bpf-maps\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641477 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-etc-cni-netd\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641494 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cni-path\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641509 2556 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-run\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641529 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-hubble-tls\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.641571 kubelet[2556]: I0714 23:41:27.641544 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-lib-modules\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642319 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-cgroup\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642386 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-kernel\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642411 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dpsn\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-kube-api-access-6dpsn\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642429 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-net\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642445 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-xtables-lock\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642624 kubelet[2556]: I0714 23:41:27.642460 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-hostproc\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642821 kubelet[2556]: I0714 23:41:27.642478 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1391798-becd-4448-8c96-43c288f8f16a-clustermesh-secrets\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642821 kubelet[2556]: I0714 23:41:27.642495 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1391798-becd-4448-8c96-43c288f8f16a-cilium-config-path\") pod \"c1391798-becd-4448-8c96-43c288f8f16a\" (UID: \"c1391798-becd-4448-8c96-43c288f8f16a\") " Jul 14 23:41:27.642821 kubelet[2556]: I0714 23:41:27.642512 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76dbb\" (UniqueName: \"kubernetes.io/projected/04f21409-337a-40dc-9947-a9187a0a59cb-kube-api-access-76dbb\") pod \"04f21409-337a-40dc-9947-a9187a0a59cb\" (UID: \"04f21409-337a-40dc-9947-a9187a0a59cb\") " Jul 14 23:41:27.642821 kubelet[2556]: I0714 23:41:27.642527 2556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21409-337a-40dc-9947-a9187a0a59cb-cilium-config-path\") pod \"04f21409-337a-40dc-9947-a9187a0a59cb\" (UID: \"04f21409-337a-40dc-9947-a9187a0a59cb\") " Jul 14 23:41:27.644067 kubelet[2556]: I0714 23:41:27.643746 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.644067 kubelet[2556]: I0714 23:41:27.643775 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.644067 kubelet[2556]: I0714 23:41:27.643748 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.644067 kubelet[2556]: I0714 23:41:27.643778 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.644067 kubelet[2556]: I0714 23:41:27.643754 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.645947 kubelet[2556]: I0714 23:41:27.645904 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1391798-becd-4448-8c96-43c288f8f16a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 23:41:27.646015 kubelet[2556]: I0714 23:41:27.645960 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.646015 kubelet[2556]: I0714 23:41:27.645978 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.646015 kubelet[2556]: I0714 23:41:27.645994 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.646249 kubelet[2556]: I0714 23:41:27.646227 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.646417 kubelet[2556]: I0714 23:41:27.646391 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:41:27.646591 kubelet[2556]: I0714 23:41:27.646477 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 23:41:27.647050 kubelet[2556]: I0714 23:41:27.647030 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f21409-337a-40dc-9947-a9187a0a59cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04f21409-337a-40dc-9947-a9187a0a59cb" (UID: "04f21409-337a-40dc-9947-a9187a0a59cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 23:41:27.647279 kubelet[2556]: I0714 23:41:27.647262 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1391798-becd-4448-8c96-43c288f8f16a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 23:41:27.647852 kubelet[2556]: I0714 23:41:27.647821 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f21409-337a-40dc-9947-a9187a0a59cb-kube-api-access-76dbb" (OuterVolumeSpecName: "kube-api-access-76dbb") pod "04f21409-337a-40dc-9947-a9187a0a59cb" (UID: "04f21409-337a-40dc-9947-a9187a0a59cb"). InnerVolumeSpecName "kube-api-access-76dbb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 23:41:27.648177 kubelet[2556]: I0714 23:41:27.648153 2556 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-kube-api-access-6dpsn" (OuterVolumeSpecName: "kube-api-access-6dpsn") pod "c1391798-becd-4448-8c96-43c288f8f16a" (UID: "c1391798-becd-4448-8c96-43c288f8f16a"). InnerVolumeSpecName "kube-api-access-6dpsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 23:41:27.663715 kubelet[2556]: I0714 23:41:27.661921 2556 scope.go:117] "RemoveContainer" containerID="e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57" Jul 14 23:41:27.665051 containerd[1462]: time="2025-07-14T23:41:27.665008883Z" level=info msg="RemoveContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\"" Jul 14 23:41:27.666333 systemd[1]: Removed slice kubepods-besteffort-pod04f21409_337a_40dc_9947_a9187a0a59cb.slice - libcontainer container kubepods-besteffort-pod04f21409_337a_40dc_9947_a9187a0a59cb.slice. Jul 14 23:41:27.668155 containerd[1462]: time="2025-07-14T23:41:27.667782899Z" level=info msg="RemoveContainer for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" returns successfully" Jul 14 23:41:27.668402 kubelet[2556]: I0714 23:41:27.668374 2556 scope.go:117] "RemoveContainer" containerID="e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57" Jul 14 23:41:27.668769 containerd[1462]: time="2025-07-14T23:41:27.668608184Z" level=error msg="ContainerStatus for \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\": not found" Jul 14 23:41:27.668825 kubelet[2556]: E0714 23:41:27.668784 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\": not found" containerID="e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57" Jul 14 23:41:27.675395 systemd[1]: Removed slice kubepods-burstable-podc1391798_becd_4448_8c96_43c288f8f16a.slice - libcontainer container kubepods-burstable-podc1391798_becd_4448_8c96_43c288f8f16a.slice. Jul 14 23:41:27.675514 systemd[1]: kubepods-burstable-podc1391798_becd_4448_8c96_43c288f8f16a.slice: Consumed 6.475s CPU time, 123.2M memory peak, 172K read from disk, 16.1M written to disk. 
Jul 14 23:41:27.676361 kubelet[2556]: I0714 23:41:27.676183 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57"} err="failed to get container status \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0b2b7bbb1e50726b449dfa86c6ffb6c16e54a17162480e3f7a280181fc9bf57\": not found" Jul 14 23:41:27.676361 kubelet[2556]: I0714 23:41:27.676294 2556 scope.go:117] "RemoveContainer" containerID="f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c" Jul 14 23:41:27.677697 containerd[1462]: time="2025-07-14T23:41:27.677671675Z" level=info msg="RemoveContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\"" Jul 14 23:41:27.679791 containerd[1462]: time="2025-07-14T23:41:27.679758887Z" level=info msg="RemoveContainer for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" returns successfully" Jul 14 23:41:27.679957 kubelet[2556]: I0714 23:41:27.679935 2556 scope.go:117] "RemoveContainer" containerID="4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8" Jul 14 23:41:27.681515 containerd[1462]: time="2025-07-14T23:41:27.681489657Z" level=info msg="RemoveContainer for \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\"" Jul 14 23:41:27.691960 containerd[1462]: time="2025-07-14T23:41:27.691784755Z" level=info msg="RemoveContainer for \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\" returns successfully" Jul 14 23:41:27.692042 kubelet[2556]: I0714 23:41:27.691980 2556 scope.go:117] "RemoveContainer" containerID="37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e" Jul 14 23:41:27.693105 containerd[1462]: time="2025-07-14T23:41:27.693034602Z" level=info msg="RemoveContainer for \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\"" Jul 14 23:41:27.695885 containerd[1462]: time="2025-07-14T23:41:27.695841578Z" level=info msg="RemoveContainer for \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\" returns successfully" Jul 14 23:41:27.696223 kubelet[2556]: I0714 23:41:27.696195 2556 scope.go:117] "RemoveContainer" containerID="3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71" Jul 14 23:41:27.697591 containerd[1462]: time="2025-07-14T23:41:27.697570508Z" level=info msg="RemoveContainer for \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\"" Jul 14 23:41:27.706269 containerd[1462]: time="2025-07-14T23:41:27.706231237Z" level=info msg="RemoveContainer for \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\" returns successfully" Jul 14 23:41:27.706476 kubelet[2556]: I0714 23:41:27.706446 2556 scope.go:117] "RemoveContainer" containerID="00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d" Jul 14 23:41:27.711778 containerd[1462]: time="2025-07-14T23:41:27.711742308Z" level=info msg="RemoveContainer for \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\"" Jul 14 23:41:27.718682 containerd[1462]: time="2025-07-14T23:41:27.718644788Z" level=info msg="RemoveContainer for \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\" returns successfully" Jul 14 23:41:27.718846 kubelet[2556]: I0714 23:41:27.718814 2556 scope.go:117] "RemoveContainer" containerID="f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c" Jul 14 23:41:27.719033 containerd[1462]: 
time="2025-07-14T23:41:27.718994670Z" level=error msg="ContainerStatus for \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\": not found" Jul 14 23:41:27.719138 kubelet[2556]: E0714 23:41:27.719110 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\": not found" containerID="f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c" Jul 14 23:41:27.719173 kubelet[2556]: I0714 23:41:27.719138 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c"} err="failed to get container status \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6aa45102da6908397e234810f8a7ee38eb37dc4a892c275f0fb7ec1753c547c\": not found" Jul 14 23:41:27.719173 kubelet[2556]: I0714 23:41:27.719157 2556 scope.go:117] "RemoveContainer" containerID="4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8" Jul 14 23:41:27.719305 containerd[1462]: time="2025-07-14T23:41:27.719275911Z" level=error msg="ContainerStatus for \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\": not found" Jul 14 23:41:27.719397 kubelet[2556]: E0714 23:41:27.719379 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\": not found" containerID="4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8" Jul 14 23:41:27.719421 kubelet[2556]: I0714 23:41:27.719400 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8"} err="failed to get container status \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b266ed35c3d29be306dd6c1e7877f6bf270d4dcb39cf8327d017a8e83f971c8\": not found" Jul 14 23:41:27.719421 kubelet[2556]: I0714 23:41:27.719413 2556 scope.go:117] "RemoveContainer" containerID="37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e" Jul 14 23:41:27.719583 containerd[1462]: time="2025-07-14T23:41:27.719548353Z" level=error msg="ContainerStatus for \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\": not found" Jul 14 23:41:27.719698 kubelet[2556]: E0714 23:41:27.719676 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\": not found" containerID="37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e" Jul 14 23:41:27.719725 kubelet[2556]: I0714 23:41:27.719706 2556 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e"} err="failed to get container status \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\": rpc error: code = NotFound desc = an error occurred when try to find container \"37bc81898b5781c5fcd07c9b2aa4096af9507b92c685ded4109c36385eaea09e\": not found" Jul 14 23:41:27.719725 kubelet[2556]: I0714 23:41:27.719723 2556 scope.go:117] "RemoveContainer" containerID="3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71" Jul 14 23:41:27.719876 containerd[1462]: time="2025-07-14T23:41:27.719854115Z" level=error msg="ContainerStatus for \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\": not found" Jul 14 23:41:27.719972 kubelet[2556]: E0714 23:41:27.719953 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\": not found" containerID="3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71" Jul 14 23:41:27.720010 kubelet[2556]: I0714 23:41:27.719977 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71"} err="failed to get container status \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c5a8c1f5c309fd2d272e292a8c907872dbe2c329e8d194293a95184fc32cd71\": not found" Jul 14 23:41:27.720040 kubelet[2556]: I0714 23:41:27.720011 2556 scope.go:117] "RemoveContainer" containerID="00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d" Jul 14 23:41:27.720268 containerd[1462]: time="2025-07-14T23:41:27.720234757Z" level=error msg="ContainerStatus for \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\": not found" Jul 14 23:41:27.720368 kubelet[2556]: E0714 23:41:27.720346 2556 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\": not found" containerID="00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d" Jul 14 23:41:27.720394 kubelet[2556]: I0714 23:41:27.720374 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d"} err="failed to get container status \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\": rpc error: code = NotFound desc = an error occurred when try to find container \"00cb7fd2bd7e238a9995e9c531ed67cc9dbe11b53d431201c7a19da477f6877d\": not found" Jul 14 23:41:27.742713 kubelet[2556]: I0714 23:41:27.742669 2556 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742713 kubelet[2556]: I0714 
23:41:27.742694 2556 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742713 kubelet[2556]: I0714 23:41:27.742704 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742713 kubelet[2556]: I0714 23:41:27.742718 2556 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742713 kubelet[2556]: I0714 23:41:27.742726 2556 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742734 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742743 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742760 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dpsn\" (UniqueName: \"kubernetes.io/projected/c1391798-becd-4448-8c96-43c288f8f16a-kube-api-access-6dpsn\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742768 2556 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742775 2556 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742783 2556 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742790 2556 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1391798-becd-4448-8c96-43c288f8f16a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.742998 kubelet[2556]: I0714 23:41:27.742797 2556 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1391798-becd-4448-8c96-43c288f8f16a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.743182 kubelet[2556]: I0714 23:41:27.742805 2556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-76dbb\" (UniqueName: \"kubernetes.io/projected/04f21409-337a-40dc-9947-a9187a0a59cb-kube-api-access-76dbb\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.743182 kubelet[2556]: I0714 23:41:27.742812 2556 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21409-337a-40dc-9947-a9187a0a59cb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:27.743182 kubelet[2556]: I0714 23:41:27.742820 2556 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1391798-becd-4448-8c96-43c288f8f16a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 23:41:28.408344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25998ebcf72c6152db2f9fcdc41900173cf47bffe55bfc5075addd1abcdc5b8d-rootfs.mount: Deactivated successfully. Jul 14 23:41:28.408462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1ac4f3373994b5697cc7a341a8a05c51f88baaf6e9878cfc3c07651844770e-rootfs.mount: Deactivated successfully. Jul 14 23:41:28.408515 systemd[1]: var-lib-kubelet-pods-04f21409\x2d337a\x2d40dc\x2d9947\x2da9187a0a59cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d76dbb.mount: Deactivated successfully. Jul 14 23:41:28.408571 systemd[1]: var-lib-kubelet-pods-c1391798\x2dbecd\x2d4448\x2d8c96\x2d43c288f8f16a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6dpsn.mount: Deactivated successfully. Jul 14 23:41:28.408622 systemd[1]: var-lib-kubelet-pods-c1391798\x2dbecd\x2d4448\x2d8c96\x2d43c288f8f16a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 23:41:28.408669 systemd[1]: var-lib-kubelet-pods-c1391798\x2dbecd\x2d4448\x2d8c96\x2d43c288f8f16a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 23:41:28.481894 kubelet[2556]: E0714 23:41:28.481844 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:29.313201 sshd[4219]: Connection closed by 10.0.0.1 port 42270 Jul 14 23:41:29.314659 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:29.326395 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:42270.service: Deactivated successfully. Jul 14 23:41:29.328503 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 23:41:29.328787 systemd[1]: session-23.scope: Consumed 1.447s CPU time, 26.8M memory peak. Jul 14 23:41:29.329298 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Jul 14 23:41:29.334442 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:42274.service - OpenSSH per-connection server daemon (10.0.0.1:42274). Jul 14 23:41:29.335211 systemd-logind[1441]: Removed session 23. Jul 14 23:41:29.376270 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 42274 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:29.377779 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:29.381452 systemd-logind[1441]: New session 24 of user core. Jul 14 23:41:29.397204 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 14 23:41:29.483766 kubelet[2556]: I0714 23:41:29.483723 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f21409-337a-40dc-9947-a9187a0a59cb" path="/var/lib/kubelet/pods/04f21409-337a-40dc-9947-a9187a0a59cb/volumes" Jul 14 23:41:29.484169 kubelet[2556]: I0714 23:41:29.484112 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1391798-becd-4448-8c96-43c288f8f16a" path="/var/lib/kubelet/pods/c1391798-becd-4448-8c96-43c288f8f16a/volumes" Jul 14 23:41:30.547231 kubelet[2556]: E0714 23:41:30.547192 2556 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 23:41:30.640053 sshd[4383]: Connection closed by 10.0.0.1 port 42274 Jul 14 23:41:30.640478 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:30.655349 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:42274.service: Deactivated successfully. Jul 14 23:41:30.659564 kubelet[2556]: I0714 23:41:30.659529 2556 memory_manager.go:355] "RemoveStaleState removing state" podUID="c1391798-becd-4448-8c96-43c288f8f16a" containerName="cilium-agent" Jul 14 23:41:30.659564 kubelet[2556]: I0714 23:41:30.659554 2556 memory_manager.go:355] "RemoveStaleState removing state" podUID="04f21409-337a-40dc-9947-a9187a0a59cb" containerName="cilium-operator" Jul 14 23:41:30.660616 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 23:41:30.660817 systemd[1]: session-24.scope: Consumed 1.165s CPU time, 25.5M memory peak. Jul 14 23:41:30.665029 kubelet[2556]: W0714 23:41:30.664940 2556 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 23:41:30.665029 kubelet[2556]: E0714 23:41:30.665003 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 23:41:30.665640 kubelet[2556]: I0714 23:41:30.665493 2556 status_manager.go:890] "Failed to get status for pod" podUID="68b0f849-f717-4467-a116-86c850f73bb7" pod="kube-system/cilium-lq5rx" err="pods \"cilium-lq5rx\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 14 23:41:30.666020 kubelet[2556]: W0714 23:41:30.665998 2556 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 23:41:30.666020 kubelet[2556]: E0714 23:41:30.666030 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 23:41:30.666133 kubelet[2556]: W0714 23:41:30.666091 2556 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 23:41:30.666133 kubelet[2556]: E0714 23:41:30.666103 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 23:41:30.666189 kubelet[2556]: W0714 23:41:30.666136 2556 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 23:41:30.666189 kubelet[2556]: E0714 23:41:30.666149 2556 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 23:41:30.667354 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Jul 14 23:41:30.677481 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:42278.service - OpenSSH per-connection server daemon (10.0.0.1:42278). Jul 14 23:41:30.680052 systemd-logind[1441]: Removed session 24. Jul 14 23:41:30.686839 systemd[1]: Created slice kubepods-burstable-pod68b0f849_f717_4467_a116_86c850f73bb7.slice - libcontainer container kubepods-burstable-pod68b0f849_f717_4467_a116_86c850f73bb7.slice. Jul 14 23:41:30.723871 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 42278 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:30.725702 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:30.730253 systemd-logind[1441]: New session 25 of user core. Jul 14 23:41:30.741298 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 14 23:41:30.760011 kubelet[2556]: I0714 23:41:30.759964 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-cilium-ipsec-secrets\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760302 kubelet[2556]: I0714 23:41:30.760178 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-host-proc-sys-kernel\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760302 kubelet[2556]: I0714 23:41:30.760228 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-cni-path\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760302 kubelet[2556]: I0714 23:41:30.760247 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-cilium-run\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760529 kubelet[2556]: I0714 23:41:30.760262 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-clustermesh-secrets\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760529 kubelet[2556]: I0714 23:41:30.760405 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvjq\" (UniqueName: \"kubernetes.io/projected/68b0f849-f717-4467-a116-86c850f73bb7-kube-api-access-dkvjq\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760529 kubelet[2556]: I0714 23:41:30.760430 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68b0f849-f717-4467-a116-86c850f73bb7-cilium-config-path\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760700 kubelet[2556]: I0714 23:41:30.760587 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-etc-cni-netd\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760700 kubelet[2556]: I0714 23:41:30.760613 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-lib-modules\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760892 kubelet[2556]: I0714 23:41:30.760774 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68b0f849-f717-4467-a116-86c850f73bb7-hubble-tls\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760892 kubelet[2556]: I0714 23:41:30.760802 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-bpf-maps\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760892 kubelet[2556]: I0714 23:41:30.760820 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-cilium-cgroup\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760892 kubelet[2556]: I0714 23:41:30.760852 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-hostproc\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.760892 kubelet[2556]: I0714 23:41:30.760867 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-host-proc-sys-net\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.761052 kubelet[2556]: I0714 23:41:30.760884 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68b0f849-f717-4467-a116-86c850f73bb7-xtables-lock\") pod \"cilium-lq5rx\" (UID: \"68b0f849-f717-4467-a116-86c850f73bb7\") " pod="kube-system/cilium-lq5rx" Jul 14 23:41:30.791840 sshd[4398]: Connection closed by 10.0.0.1 port 42278 Jul 14 23:41:30.791253 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:30.804398 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:42278.service: Deactivated successfully. Jul 14 23:41:30.806137 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 23:41:30.807619 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Jul 14 23:41:30.808770 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:42286.service - OpenSSH per-connection server daemon (10.0.0.1:42286). Jul 14 23:41:30.809972 systemd-logind[1441]: Removed session 25. Jul 14 23:41:30.846499 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 42286 ssh2: RSA SHA256:IvD9s0mdaxDRPTMhnea16rOup9lIBeQNRvhwADTAo+s Jul 14 23:41:30.847592 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:41:30.854266 systemd-logind[1441]: New session 26 of user core. Jul 14 23:41:30.858223 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 14 23:41:31.863010 kubelet[2556]: E0714 23:41:31.862699 2556 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 14 23:41:31.863010 kubelet[2556]: E0714 23:41:31.862746 2556 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 14 23:41:31.863010 kubelet[2556]: E0714 23:41:31.862792 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-clustermesh-secrets podName:68b0f849-f717-4467-a116-86c850f73bb7 nodeName:}" failed. No retries permitted until 2025-07-14 23:41:32.362764665 +0000 UTC m=+76.948642387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-clustermesh-secrets") pod "cilium-lq5rx" (UID: "68b0f849-f717-4467-a116-86c850f73bb7") : failed to sync secret cache: timed out waiting for the condition Jul 14 23:41:31.863010 kubelet[2556]: E0714 23:41:31.862825 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-cilium-ipsec-secrets podName:68b0f849-f717-4467-a116-86c850f73bb7 nodeName:}" failed. No retries permitted until 2025-07-14 23:41:32.362802025 +0000 UTC m=+76.948679787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/68b0f849-f717-4467-a116-86c850f73bb7-cilium-ipsec-secrets") pod "cilium-lq5rx" (UID: "68b0f849-f717-4467-a116-86c850f73bb7") : failed to sync secret cache: timed out waiting for the condition Jul 14 23:41:31.864280 kubelet[2556]: E0714 23:41:31.864251 2556 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 14 23:41:31.864335 kubelet[2556]: E0714 23:41:31.864327 2556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68b0f849-f717-4467-a116-86c850f73bb7-cilium-config-path podName:68b0f849-f717-4467-a116-86c850f73bb7 nodeName:}" failed. No retries permitted until 2025-07-14 23:41:32.364311273 +0000 UTC m=+76.950188995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/68b0f849-f717-4467-a116-86c850f73bb7-cilium-config-path") pod "cilium-lq5rx" (UID: "68b0f849-f717-4467-a116-86c850f73bb7") : failed to sync configmap cache: timed out waiting for the condition Jul 14 23:41:32.493814 kubelet[2556]: E0714 23:41:32.493732 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:32.494469 containerd[1462]: time="2025-07-14T23:41:32.494410494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lq5rx,Uid:68b0f849-f717-4467-a116-86c850f73bb7,Namespace:kube-system,Attempt:0,}" Jul 14 23:41:32.566668 containerd[1462]: time="2025-07-14T23:41:32.566509171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:41:32.566668 containerd[1462]: time="2025-07-14T23:41:32.566564012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:41:32.566668 containerd[1462]: time="2025-07-14T23:41:32.566579452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:41:32.566834 containerd[1462]: time="2025-07-14T23:41:32.566772453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:41:32.588223 systemd[1]: Started cri-containerd-71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5.scope - libcontainer container 71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5. Jul 14 23:41:32.609948 containerd[1462]: time="2025-07-14T23:41:32.609871026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lq5rx,Uid:68b0f849-f717-4467-a116-86c850f73bb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\"" Jul 14 23:41:32.610764 kubelet[2556]: E0714 23:41:32.610562 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:32.612771 containerd[1462]: time="2025-07-14T23:41:32.612746360Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 23:41:32.624826 containerd[1462]: time="2025-07-14T23:41:32.624725580Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9\"" Jul 14 23:41:32.625190 containerd[1462]: time="2025-07-14T23:41:32.625143182Z" level=info msg="StartContainer for \"9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9\"" Jul 14 23:41:32.655642 systemd[1]: Started cri-containerd-9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9.scope - libcontainer container 9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9. Jul 14 23:41:32.681455 containerd[1462]: time="2025-07-14T23:41:32.681419781Z" level=info msg="StartContainer for \"9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9\" returns successfully" Jul 14 23:41:32.704789 systemd[1]: cri-containerd-9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9.scope: Deactivated successfully. 
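[Editor's note] The containerd entries above trace the standard CRI sequence for bringing up the cilium-lq5rx pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox for the mount-cgroup init container, and StartContainer runs it. A rough sketch of that sequence against the CRI runtime service is shown below; the socket path, image reference, and omitted config fields are placeholders, not values taken from the log.

    // Rough sketch of the CRI calls visible in the log:
    // RunPodSandbox -> CreateContainer -> StartContainer.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // placeholder socket path
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "cilium-lq5rx",
                Namespace: "kube-system",
                Uid:       "68b0f849-f717-4467-a116-86c850f73bb7",
            },
        }

        // 1. "RunPodSandbox for &PodSandboxMetadata{Name:cilium-lq5rx,...} returns sandbox id ..."
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. "CreateContainer within sandbox ... for container &ContainerMetadata{Name:mount-cgroup,...}"
        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:<tag>"}, // placeholder image
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. "StartContainer for ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }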
Jul 14 23:41:32.731525 containerd[1462]: time="2025-07-14T23:41:32.731366148Z" level=info msg="shim disconnected" id=9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9 namespace=k8s.io Jul 14 23:41:32.731525 containerd[1462]: time="2025-07-14T23:41:32.731443829Z" level=warning msg="cleaning up after shim disconnected" id=9ded7c45683e2d2cc2ed90ad438ace50173f11a38af045453e9c2547b8f6f2e9 namespace=k8s.io Jul 14 23:41:32.731525 containerd[1462]: time="2025-07-14T23:41:32.731452429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:33.482369 kubelet[2556]: E0714 23:41:33.482342 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:33.482735 kubelet[2556]: E0714 23:41:33.482421 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:33.688968 kubelet[2556]: E0714 23:41:33.688864 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:33.691871 containerd[1462]: time="2025-07-14T23:41:33.691837097Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 23:41:33.709597 containerd[1462]: time="2025-07-14T23:41:33.709508583Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf\"" Jul 14 23:41:33.710481 containerd[1462]: time="2025-07-14T23:41:33.710415227Z" level=info msg="StartContainer for \"76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf\"" Jul 14 23:41:33.745258 systemd[1]: Started cri-containerd-76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf.scope - libcontainer container 76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf. Jul 14 23:41:33.768126 containerd[1462]: time="2025-07-14T23:41:33.767982145Z" level=info msg="StartContainer for \"76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf\" returns successfully" Jul 14 23:41:33.778584 systemd[1]: cri-containerd-76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf.scope: Deactivated successfully. Jul 14 23:41:33.802043 containerd[1462]: time="2025-07-14T23:41:33.801986629Z" level=info msg="shim disconnected" id=76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf namespace=k8s.io Jul 14 23:41:33.802043 containerd[1462]: time="2025-07-14T23:41:33.802040189Z" level=warning msg="cleaning up after shim disconnected" id=76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf namespace=k8s.io Jul 14 23:41:33.802043 containerd[1462]: time="2025-07-14T23:41:33.802048829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:34.374134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76300d75d69b6352a22cfc6e0f4df39178f898b01d5b75616e7f41b60d486ebf-rootfs.mount: Deactivated successfully. 
Jul 14 23:41:34.692962 kubelet[2556]: E0714 23:41:34.692546 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:34.695420 containerd[1462]: time="2025-07-14T23:41:34.695375811Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 23:41:34.715852 containerd[1462]: time="2025-07-14T23:41:34.715720827Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e\"" Jul 14 23:41:34.716350 containerd[1462]: time="2025-07-14T23:41:34.716208749Z" level=info msg="StartContainer for \"a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e\"" Jul 14 23:41:34.742267 systemd[1]: Started cri-containerd-a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e.scope - libcontainer container a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e. Jul 14 23:41:34.767342 containerd[1462]: time="2025-07-14T23:41:34.767291949Z" level=info msg="StartContainer for \"a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e\" returns successfully" Jul 14 23:41:34.768837 systemd[1]: cri-containerd-a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e.scope: Deactivated successfully. Jul 14 23:41:34.792906 containerd[1462]: time="2025-07-14T23:41:34.792836109Z" level=info msg="shim disconnected" id=a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e namespace=k8s.io Jul 14 23:41:34.792906 containerd[1462]: time="2025-07-14T23:41:34.792888909Z" level=warning msg="cleaning up after shim disconnected" id=a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e namespace=k8s.io Jul 14 23:41:34.792906 containerd[1462]: time="2025-07-14T23:41:34.792896909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:35.374195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39333cff8d904219ff9a895f8bdb797dfee70c76164aaab0ea05a711b28527e-rootfs.mount: Deactivated successfully. 
Jul 14 23:41:35.547646 kubelet[2556]: E0714 23:41:35.547609 2556 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 23:41:35.696825 kubelet[2556]: E0714 23:41:35.696459 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:35.699411 containerd[1462]: time="2025-07-14T23:41:35.699275442Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 23:41:35.714377 containerd[1462]: time="2025-07-14T23:41:35.714301030Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007\"" Jul 14 23:41:35.715782 containerd[1462]: time="2025-07-14T23:41:35.715738637Z" level=info msg="StartContainer for \"f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007\"" Jul 14 23:41:35.741284 systemd[1]: Started cri-containerd-f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007.scope - libcontainer container f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007. Jul 14 23:41:35.760857 systemd[1]: cri-containerd-f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007.scope: Deactivated successfully. Jul 14 23:41:35.762761 containerd[1462]: time="2025-07-14T23:41:35.762650332Z" level=info msg="StartContainer for \"f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007\" returns successfully" Jul 14 23:41:35.781220 containerd[1462]: time="2025-07-14T23:41:35.781159776Z" level=info msg="shim disconnected" id=f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007 namespace=k8s.io Jul 14 23:41:35.781220 containerd[1462]: time="2025-07-14T23:41:35.781216737Z" level=warning msg="cleaning up after shim disconnected" id=f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007 namespace=k8s.io Jul 14 23:41:35.781220 containerd[1462]: time="2025-07-14T23:41:35.781225897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:41:36.374361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7bb6bba5579860bb601de0a869f890c90bf0d0f859b3d8049c733c5e9e17007-rootfs.mount: Deactivated successfully. 
Jul 14 23:41:36.482021 kubelet[2556]: E0714 23:41:36.481986 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:36.700845 kubelet[2556]: E0714 23:41:36.700731 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:36.703636 containerd[1462]: time="2025-07-14T23:41:36.703501232Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 23:41:36.716141 containerd[1462]: time="2025-07-14T23:41:36.715922368Z" level=info msg="CreateContainer within sandbox \"71d5ce9f1261173c3def769fb99c3dfe064c593a77172b35b606e96b608204c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd\"" Jul 14 23:41:36.716528 containerd[1462]: time="2025-07-14T23:41:36.716476290Z" level=info msg="StartContainer for \"e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd\"" Jul 14 23:41:36.767254 systemd[1]: Started cri-containerd-e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd.scope - libcontainer container e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd. Jul 14 23:41:36.790383 containerd[1462]: time="2025-07-14T23:41:36.790317819Z" level=info msg="StartContainer for \"e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd\" returns successfully" Jul 14 23:41:37.063110 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 14 23:41:37.158424 kubelet[2556]: I0714 23:41:37.157908 2556 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T23:41:37Z","lastTransitionTime":"2025-07-14T23:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 23:41:37.374371 systemd[1]: run-containerd-runc-k8s.io-e4a283fd0f0fb8630dcb62f4d1fe855bce886a4b613419e866264a45ec5c78bd-runc.1fiiOT.mount: Deactivated successfully. 
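[Editor's note] The recurring dns.go:153 errors mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the per-pod limit is three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A trivial sketch of that truncation rule follows; the constant and function names are illustrative, not the kubelet's own symbols.

    // Illustrative: keep at most three nameservers, as the kubelet warning describes.
    package main

    import "fmt"

    const maxDNSNameservers = 3 // kubelet's per-pod nameserver limit

    func applyNameservers(ns []string) []string {
        if len(ns) > maxDNSNameservers {
            return ns[:maxDNSNameservers]
        }
        return ns
    }

    func main() {
        fmt.Println(applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
        // Output: [1.1.1.1 1.0.0.1 8.8.8.8]
    }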
Jul 14 23:41:37.705879 kubelet[2556]: E0714 23:41:37.705779 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:38.710396 kubelet[2556]: E0714 23:41:38.710324 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:39.711580 kubelet[2556]: E0714 23:41:39.711533 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:39.909716 systemd-networkd[1391]: lxc_health: Link UP Jul 14 23:41:39.910556 systemd-networkd[1391]: lxc_health: Gained carrier Jul 14 23:41:40.519891 kubelet[2556]: I0714 23:41:40.519828 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lq5rx" podStartSLOduration=10.519811172 podStartE2EDuration="10.519811172s" podCreationTimestamp="2025-07-14 23:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:41:37.720389121 +0000 UTC m=+82.306266843" watchObservedRunningTime="2025-07-14 23:41:40.519811172 +0000 UTC m=+85.105688894" Jul 14 23:41:40.713217 kubelet[2556]: E0714 23:41:40.713178 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:41.042235 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jul 14 23:41:41.375588 kubelet[2556]: E0714 23:41:41.374858 2556 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58186->127.0.0.1:40271: write tcp 127.0.0.1:58186->127.0.0.1:40271: write: broken pipe Jul 14 23:41:41.714644 kubelet[2556]: E0714 23:41:41.714488 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:42.716363 kubelet[2556]: E0714 23:41:42.716336 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:41:45.621111 sshd[4407]: Connection closed by 10.0.0.1 port 42286 Jul 14 23:41:45.621599 sshd-session[4404]: pam_unix(sshd:session): session closed for user core Jul 14 23:41:45.624927 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:42286.service: Deactivated successfully. Jul 14 23:41:45.628641 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 23:41:45.629268 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. Jul 14 23:41:45.630575 systemd-logind[1441]: Removed session 26.
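[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=10.519811172s, which is simply observedRunningTime (23:41:40.519811172) minus podCreationTimestamp (23:41:30); the zero pull timestamps typically indicate the image was already present on the node, so no pull time is excluded. The same arithmetic in Go, using the timestamp format these fields are printed in (error handling omitted for brevity):

    // Recompute the pod start duration from the two timestamps in the log entry.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's time.Time String() format
        created, _ := time.Parse(layout, "2025-07-14 23:41:30 +0000 UTC")
        running, _ := time.Parse(layout, "2025-07-14 23:41:40.519811172 +0000 UTC")
        fmt.Println(running.Sub(created)) // 10.519811172s
    }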