Jul 7 06:13:34.910512 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 06:13:34.910535 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 06:13:34.910545 kernel: KASLR enabled
Jul 7 06:13:34.910551 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:13:34.910557 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 06:13:34.910563 kernel: random: crng init done
Jul 7 06:13:34.910570 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:13:34.910576 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 06:13:34.910582 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:13:34.910589 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910596 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910602 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910608 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910614 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910621 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910629 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910636 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910643 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:13:34.910649 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 06:13:34.910655 kernel: NUMA: Failed to initialise from firmware
Jul 7 06:13:34.910662 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:13:34.910668 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 7 06:13:34.910674 kernel: Zone ranges:
Jul 7 06:13:34.910681 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:13:34.910687 kernel: DMA32 empty
Jul 7 06:13:34.910695 kernel: Normal empty
Jul 7 06:13:34.910701 kernel: Movable zone start for each node
Jul 7 06:13:34.910707 kernel: Early memory node ranges
Jul 7 06:13:34.910713 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 06:13:34.910720 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 06:13:34.910726 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 06:13:34.910732 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 06:13:34.910738 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 06:13:34.910745 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 06:13:34.910751 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 06:13:34.910757 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:13:34.910764 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 06:13:34.910771 kernel: psci: probing for conduit method from ACPI.
Jul 7 06:13:34.910778 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 06:13:34.910784 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 06:13:34.910793 kernel: psci: Trusted OS migration not required
Jul 7 06:13:34.910800 kernel: psci: SMC Calling Convention v1.1
Jul 7 06:13:34.910807 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 06:13:34.910815 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 06:13:34.910822 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 06:13:34.910829 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 06:13:34.910835 kernel: Detected PIPT I-cache on CPU0
Jul 7 06:13:34.910842 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 06:13:34.910849 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 06:13:34.910855 kernel: CPU features: detected: Spectre-v4
Jul 7 06:13:34.910862 kernel: CPU features: detected: Spectre-BHB
Jul 7 06:13:34.910869 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 06:13:34.910876 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 06:13:34.910884 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 06:13:34.910891 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 06:13:34.910897 kernel: alternatives: applying boot alternatives
Jul 7 06:13:34.910905 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:13:34.910913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:13:34.910919 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:13:34.910926 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:13:34.910933 kernel: Fallback order for Node 0: 0
Jul 7 06:13:34.910940 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 06:13:34.910947 kernel: Policy zone: DMA
Jul 7 06:13:34.910954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:13:34.910963 kernel: software IO TLB: area num 4.
Jul 7 06:13:34.910970 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 06:13:34.910977 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 7 06:13:34.910984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:13:34.910991 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:13:34.910998 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:13:34.911005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:13:34.911012 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:13:34.911018 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:13:34.911025 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
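
The "Kernel command line:" entry above carries the root device, dm-verity hash, and console settings that the initrd acts on later in this log. A minimal sketch of how such a line splits into key/value options (parse_cmdline is a hypothetical helper, not part of the kernel or Flatcar):

    # Sketch: split a kernel command line, as logged above, into options.
    def parse_cmdline(line: str) -> dict:
        opts = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            opts[key] = value if sep else True  # bare tokens act as boolean flags
        return opts

    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
               "verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b")
    assert parse_cmdline(cmdline)["root"] == "LABEL=ROOT"
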
Jul 7 06:13:34.911032 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:13:34.911039 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 06:13:34.911048 kernel: GICv3: 256 SPIs implemented
Jul 7 06:13:34.911054 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 06:13:34.911061 kernel: Root IRQ handler: gic_handle_irq
Jul 7 06:13:34.911068 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 06:13:34.911074 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 06:13:34.911081 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 06:13:34.911088 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 06:13:34.911095 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 06:13:34.911102 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 06:13:34.911109 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 06:13:34.911116 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:13:34.911124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:13:34.911131 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 06:13:34.911138 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 06:13:34.911144 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 06:13:34.911160 kernel: arm-pv: using stolen time PV
Jul 7 06:13:34.911168 kernel: Console: colour dummy device 80x25
Jul 7 06:13:34.911175 kernel: ACPI: Core revision 20230628
Jul 7 06:13:34.911183 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 06:13:34.911190 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:13:34.911197 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 06:13:34.911206 kernel: landlock: Up and running.
Jul 7 06:13:34.911213 kernel: SELinux: Initializing.
Jul 7 06:13:34.911220 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:13:34.911227 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:13:34.911235 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:13:34.911242 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:13:34.911249 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:13:34.911256 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:13:34.911263 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 06:13:34.911271 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 06:13:34.911278 kernel: Remapping and enabling EFI services.
Jul 7 06:13:34.911285 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:13:34.911292 kernel: Detected PIPT I-cache on CPU1
Jul 7 06:13:34.911299 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 06:13:34.911306 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 06:13:34.911314 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:13:34.911321 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 06:13:34.911328 kernel: Detected PIPT I-cache on CPU2
Jul 7 06:13:34.911335 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 06:13:34.911344 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 06:13:34.911351 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:13:34.911363 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 06:13:34.911372 kernel: Detected PIPT I-cache on CPU3
Jul 7 06:13:34.911379 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 06:13:34.911386 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 06:13:34.911394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:13:34.911409 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 06:13:34.911417 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:13:34.911427 kernel: SMP: Total of 4 processors activated.
Jul 7 06:13:34.911434 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 06:13:34.911442 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 06:13:34.911449 kernel: CPU features: detected: Common not Private translations
Jul 7 06:13:34.911456 kernel: CPU features: detected: CRC32 instructions
Jul 7 06:13:34.911463 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 06:13:34.911471 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 06:13:34.911479 kernel: CPU features: detected: LSE atomic instructions
Jul 7 06:13:34.911487 kernel: CPU features: detected: Privileged Access Never
Jul 7 06:13:34.911495 kernel: CPU features: detected: RAS Extension Support
Jul 7 06:13:34.911502 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 06:13:34.911509 kernel: CPU: All CPU(s) started at EL1
Jul 7 06:13:34.911516 kernel: alternatives: applying system-wide alternatives
Jul 7 06:13:34.911523 kernel: devtmpfs: initialized
Jul 7 06:13:34.911531 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:13:34.911538 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:13:34.911545 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:13:34.911554 kernel: SMBIOS 3.0.0 present.
Jul 7 06:13:34.911562 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 06:13:34.911569 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:13:34.911576 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 06:13:34.911584 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 06:13:34.911591 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 06:13:34.911599 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:13:34.911606 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 7 06:13:34.911613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:13:34.911622 kernel: cpuidle: using governor menu
Jul 7 06:13:34.911629 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 06:13:34.911637 kernel: ASID allocator initialised with 32768 entries
Jul 7 06:13:34.911644 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:13:34.911651 kernel: Serial: AMBA PL011 UART driver
Jul 7 06:13:34.911658 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 06:13:34.911666 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 06:13:34.911673 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 06:13:34.911680 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:13:34.911689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:13:34.911697 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 06:13:34.911705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 06:13:34.911712 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:13:34.911719 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:13:34.911726 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 06:13:34.911734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 06:13:34.911741 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:13:34.911748 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:13:34.911757 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:13:34.911764 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:13:34.911772 kernel: ACPI: Interpreter enabled
Jul 7 06:13:34.911779 kernel: ACPI: Using GIC for interrupt routing
Jul 7 06:13:34.911786 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 06:13:34.911793 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 06:13:34.911801 kernel: printk: console [ttyAMA0] enabled
Jul 7 06:13:34.911808 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:13:34.911951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:13:34.912029 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 06:13:34.912092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 06:13:34.912170 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 06:13:34.912240 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 06:13:34.912250 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 06:13:34.912258 kernel: PCI host bridge to bus 0000:00
Jul 7 06:13:34.912329 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 06:13:34.912392 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 06:13:34.912477 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 06:13:34.912533 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:13:34.912615 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 06:13:34.912689 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 06:13:34.912756 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 06:13:34.912825 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 06:13:34.912890 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:13:34.912955 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:13:34.913019 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 06:13:34.913084 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 06:13:34.913142 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 06:13:34.913214 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 06:13:34.913277 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 06:13:34.913287 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 06:13:34.913295 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 06:13:34.913302 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 06:13:34.913310 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 06:13:34.913317 kernel: iommu: Default domain type: Translated
Jul 7 06:13:34.913324 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 06:13:34.913332 kernel: efivars: Registered efivars operations
Jul 7 06:13:34.913339 kernel: vgaarb: loaded
Jul 7 06:13:34.913348 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 06:13:34.913356 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:13:34.913363 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:13:34.913371 kernel: pnp: PnP ACPI init
Jul 7 06:13:34.913472 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 06:13:34.913484 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 06:13:34.913492 kernel: NET: Registered PF_INET protocol family
Jul 7 06:13:34.913499 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:13:34.913510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:13:34.913518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:13:34.913525 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:13:34.913533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:13:34.913540 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:13:34.913548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:13:34.913555 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:13:34.913562 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:13:34.913569 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:13:34.913579 kernel: kvm [1]: HYP mode not available
Jul 7 06:13:34.913586 kernel: Initialise system trusted keyrings
Jul 7 06:13:34.913594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:13:34.913601 kernel: Key type asymmetric registered
Jul 7 06:13:34.913608 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:13:34.913616 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:13:34.913623 kernel: io scheduler mq-deadline registered
Jul 7 06:13:34.913630 kernel: io scheduler kyber registered
Jul 7 06:13:34.913638 kernel: io scheduler bfq registered
Jul 7 06:13:34.913647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 06:13:34.913654 kernel: ACPI: button: Power Button [PWRB]
Jul 7 06:13:34.913663 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 06:13:34.913734 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 06:13:34.913745 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:13:34.913752 kernel: thunder_xcv, ver 1.0
Jul 7 06:13:34.913760 kernel: thunder_bgx, ver 1.0
Jul 7 06:13:34.913767 kernel: nicpf, ver 1.0
Jul 7 06:13:34.913774 kernel: nicvf, ver 1.0
Jul 7 06:13:34.913850 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 06:13:34.913912 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:13:34 UTC (1751868814)
Jul 7 06:13:34.913922 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 06:13:34.913929 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 06:13:34.913937 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 06:13:34.913944 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 06:13:34.913952 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:13:34.913959 kernel: Segment Routing with IPv6
Jul 7 06:13:34.913969 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:13:34.913976 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:13:34.913983 kernel: Key type dns_resolver registered
Jul 7 06:13:34.913990 kernel: registered taskstats version 1
Jul 7 06:13:34.913998 kernel: Loading compiled-in X.509 certificates
Jul 7 06:13:34.914006 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 06:13:34.914013 kernel: Key type .fscrypt registered
Jul 7 06:13:34.914020 kernel: Key type fscrypt-provisioning registered
Jul 7 06:13:34.914027 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:13:34.914036 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:13:34.914044 kernel: ima: No architecture policies found
Jul 7 06:13:34.914051 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 06:13:34.914059 kernel: clk: Disabling unused clocks
Jul 7 06:13:34.914066 kernel: Freeing unused kernel memory: 39424K
Jul 7 06:13:34.914073 kernel: Run /init as init process
Jul 7 06:13:34.914080 kernel: with arguments:
Jul 7 06:13:34.914088 kernel: /init
Jul 7 06:13:34.914095 kernel: with environment:
Jul 7 06:13:34.914103 kernel: HOME=/
Jul 7 06:13:34.914111 kernel: TERM=linux
Jul 7 06:13:34.914118 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:13:34.914127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:13:34.914136 systemd[1]: Detected virtualization kvm.
Jul 7 06:13:34.914145 systemd[1]: Detected architecture arm64.
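
The rtc-efi entry above reports the same instant twice, once as a UTC timestamp and once as epoch seconds; the two can be cross-checked (a quick sanity sketch, nothing more):

    # Sketch: the epoch value logged by rtc-efi should round-trip to the
    # UTC timestamp that appears in the same message.
    from datetime import datetime, timezone

    epoch = 1751868814  # from "setting system clock to 2025-07-07T06:13:34 UTC (1751868814)"
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-07-07T06:13:34+00:00
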
Jul 7 06:13:34.914160 systemd[1]: Running in initrd.
Jul 7 06:13:34.914167 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:13:34.914177 systemd[1]: Hostname set to <localhost>.
Jul 7 06:13:34.914185 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:13:34.914193 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:13:34.914201 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:34.914209 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:34.914218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:13:34.914226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:34.914235 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:13:34.914244 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:13:34.914253 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:13:34.914261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:13:34.914269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:34.914277 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:34.914285 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:34.914294 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:34.914302 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:34.914310 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:34.914318 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:34.914326 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:34.914334 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:13:34.914342 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 06:13:34.914350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:34.914358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:34.914368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:34.914376 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:34.914384 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:13:34.914392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:34.914417 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:13:34.914427 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:13:34.914435 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:34.914443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:34.914454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:34.914462 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:34.914470 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:34.914478 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:13:34.914486 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:13:34.914520 systemd-journald[239]: Collecting audit messages is disabled.
Jul 7 06:13:34.914541 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:34.914549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:34.914558 systemd-journald[239]: Journal started
Jul 7 06:13:34.914579 systemd-journald[239]: Runtime Journal (/run/log/journal/f244fa38f7344aa0b491e8dca4629d46) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:13:34.901800 systemd-modules-load[240]: Inserted module 'overlay'
Jul 7 06:13:34.918069 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:34.921419 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:13:34.922966 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 7 06:13:34.923916 kernel: Bridge firewalling registered
Jul 7 06:13:34.926555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:34.928358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:13:34.930492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:34.934598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:34.938545 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:34.939689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:34.944710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:34.947755 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:34.959646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:34.960922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:34.964369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:13:34.978306 dracut-cmdline[281]: dracut-dracut-053
Jul 7 06:13:34.980994 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:13:34.988291 systemd-resolved[275]: Positive Trust Anchors:
Jul 7 06:13:34.988308 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:13:34.988340 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:13:34.993184 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jul 7 06:13:34.994175 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:34.998578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:35.059428 kernel: SCSI subsystem initialized
Jul 7 06:13:35.065422 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:13:35.071426 kernel: iscsi: registered transport (tcp)
Jul 7 06:13:35.085449 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:13:35.085502 kernel: QLogic iSCSI HBA Driver
Jul 7 06:13:35.128166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:13:35.143583 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:13:35.159500 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:13:35.159557 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:13:35.162421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 06:13:35.208437 kernel: raid6: neonx8 gen() 15789 MB/s
Jul 7 06:13:35.225422 kernel: raid6: neonx4 gen() 15643 MB/s
Jul 7 06:13:35.242434 kernel: raid6: neonx2 gen() 13248 MB/s
Jul 7 06:13:35.259430 kernel: raid6: neonx1 gen() 10483 MB/s
Jul 7 06:13:35.276424 kernel: raid6: int64x8 gen() 6962 MB/s
Jul 7 06:13:35.293423 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 7 06:13:35.310429 kernel: raid6: int64x2 gen() 6130 MB/s
Jul 7 06:13:35.327521 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 7 06:13:35.327547 kernel: raid6: using algorithm neonx8 gen() 15789 MB/s
Jul 7 06:13:35.345498 kernel: raid6: .... xor() 11938 MB/s, rmw enabled
Jul 7 06:13:35.345511 kernel: raid6: using neon recovery algorithm
Jul 7 06:13:35.351799 kernel: xor: measuring software checksum speed
Jul 7 06:13:35.351818 kernel: 8regs : 19707 MB/sec
Jul 7 06:13:35.351829 kernel: 32regs : 18982 MB/sec
Jul 7 06:13:35.352466 kernel: arm64_neon : 26866 MB/sec
Jul 7 06:13:35.352479 kernel: xor: using function: arm64_neon (26866 MB/sec)
Jul 7 06:13:35.405430 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:13:35.417144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:13:35.429667 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:35.441352 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 7 06:13:35.444660 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:35.462653 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:13:35.475589 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jul 7 06:13:35.504090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:13:35.515562 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:13:35.556391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:35.566628 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:13:35.579533 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:13:35.581201 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:13:35.583694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:35.585565 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:13:35.597865 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:13:35.609104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:13:35.613510 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 06:13:35.613661 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:13:35.613628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:13:35.620408 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:13:35.620431 kernel: GPT:9289727 != 19775487
Jul 7 06:13:35.613735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:35.625217 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:13:35.625240 kernel: GPT:9289727 != 19775487
Jul 7 06:13:35.625249 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:13:35.625259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:35.625250 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:35.626392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:13:35.626561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:35.629618 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:35.641420 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (515)
Jul 7 06:13:35.643109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:35.645861 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (512)
Jul 7 06:13:35.652026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:13:35.658427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:35.666937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:13:35.674273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:13:35.678213 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:13:35.679442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:13:35.694615 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:13:35.696513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:35.701206 disk-uuid[553]: Primary Header is updated.
Jul 7 06:13:35.701206 disk-uuid[553]: Secondary Entries is updated.
Jul 7 06:13:35.701206 disk-uuid[553]: Secondary Header is updated.
Jul 7 06:13:35.706420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:35.719974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:36.717418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:36.717572 disk-uuid[554]: The operation has completed successfully.
Jul 7 06:13:36.739322 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:13:36.739454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:13:36.758609 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:13:36.761330 sh[577]: Success
Jul 7 06:13:36.774583 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 06:13:36.802684 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:13:36.810791 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:13:36.813066 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:13:36.823420 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 06:13:36.823460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:13:36.823472 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 06:13:36.823482 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 06:13:36.824778 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 06:13:36.828390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:13:36.829706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:13:36.839551 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:13:36.841098 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:13:36.848270 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:13:36.848312 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:13:36.848323 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:13:36.851433 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:13:36.858235 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 06:13:36.859962 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:13:36.865701 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:13:36.874571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:13:36.934635 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:13:36.946765 systemd[1]: Starting systemd-networkd.service - Network Configuration...
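
The GPT warnings above ("GPT:9289727 != 19775487") compare the backup-header LBA recorded in the disk image's primary header against the true last sector of the 19775488-sector virtio disk; disk-uuid.service then rewrites the headers, which is why "vda: vda1 vda2 ..." is rescanned. A sketch of how that comparison can be reproduced from the primary GPT header (device path taken from the log; run as root):

    # Sketch: read the backup-header LBA (byte offset 32 of the GPT primary
    # header at LBA 1) and compare it with the disk's actual last LBA,
    # reproducing the "GPT:9289727 != 19775487" check logged above.
    import struct

    with open("/dev/vda", "rb") as disk:
        disk.seek(512)                        # primary GPT header sits at LBA 1
        header = disk.read(92)

    assert header[:8] == b"EFI PART"          # GPT signature
    backup_lba = struct.unpack_from("<Q", header, 32)[0]
    last_lba = 19775488 - 1                   # 19775488 512-byte sectors, from the log
    print(backup_lba, last_lba, backup_lba == last_lba)
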
Jul 7 06:13:36.971176 ignition[671]: Ignition 2.19.0
Jul 7 06:13:36.971185 ignition[671]: Stage: fetch-offline
Jul 7 06:13:36.971221 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:36.971229 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:13:36.971381 ignition[671]: parsed url from cmdline: ""
Jul 7 06:13:36.975926 systemd-networkd[767]: lo: Link UP
Jul 7 06:13:36.971384 ignition[671]: no config URL provided
Jul 7 06:13:36.975930 systemd-networkd[767]: lo: Gained carrier
Jul 7 06:13:36.971388 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:13:36.976690 systemd-networkd[767]: Enumeration completed
Jul 7 06:13:36.971395 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:13:36.976781 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:13:36.971427 ignition[671]: op(1): [started] loading QEMU firmware config module
Jul 7 06:13:36.977810 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:36.971433 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:13:36.977816 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:13:36.986034 ignition[671]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:13:36.978607 systemd-networkd[767]: eth0: Link UP
Jul 7 06:13:36.978610 systemd-networkd[767]: eth0: Gained carrier
Jul 7 06:13:36.978617 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:36.978933 systemd[1]: Reached target network.target - Network.
Jul 7 06:13:37.008459 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:13:37.030076 ignition[671]: parsing config with SHA512: 676a87c652d1fa6ef85f7fa85ea9bb5d19f0665cb548cd5e9dd0be959882277a7da777034d49ebd45b88485f9ba32713186d67441d794215c9c8a694031f197b
Jul 7 06:13:37.034567 unknown[671]: fetched base config from "system"
Jul 7 06:13:37.034578 unknown[671]: fetched user config from "qemu"
Jul 7 06:13:37.035242 ignition[671]: fetch-offline: fetch-offline passed
Jul 7 06:13:37.036700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:13:37.035327 ignition[671]: Ignition finished successfully
Jul 7 06:13:37.038287 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:13:37.047535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:13:37.057555 ignition[773]: Ignition 2.19.0
Jul 7 06:13:37.057564 ignition[773]: Stage: kargs
Jul 7 06:13:37.057720 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:37.057729 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:13:37.058602 ignition[773]: kargs: kargs passed
Jul 7 06:13:37.058645 ignition[773]: Ignition finished successfully
Jul 7 06:13:37.062727 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:13:37.074578 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
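
Ignition logs above the SHA512 of the config it parsed, and /run/ignition.json is the path named in the ConditionPathExists check. Where that file holds the same bytes that were hashed (an assumption, not guaranteed by the log), the digest can be recomputed for comparison:

    # Sketch: recompute the SHA512 that Ignition logged for its parsed config.
    # Whether /run/ignition.json contains exactly the hashed bytes is an
    # assumption; compare the output against the digest in the log above.
    import hashlib

    with open("/run/ignition.json", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
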
Jul 7 06:13:37.084056 ignition[782]: Ignition 2.19.0
Jul 7 06:13:37.084067 ignition[782]: Stage: disks
Jul 7 06:13:37.084247 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:37.084257 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:13:37.085127 ignition[782]: disks: disks passed
Jul 7 06:13:37.085180 ignition[782]: Ignition finished successfully
Jul 7 06:13:37.088429 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:13:37.089696 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:13:37.091294 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:13:37.093274 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:13:37.095192 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:13:37.097000 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:13:37.110537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:13:37.122238 systemd-resolved[275]: Detected conflict on linux IN A 10.0.0.145
Jul 7 06:13:37.122255 systemd-resolved[275]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jul 7 06:13:37.124967 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 06:13:37.128771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:13:37.130858 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:13:37.177416 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 06:13:37.177722 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:13:37.178944 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:13:37.201502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:13:37.203890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:13:37.204940 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:13:37.204978 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:13:37.205002 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:13:37.210967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:13:37.213650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:13:37.218176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (801)
Jul 7 06:13:37.218206 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:13:37.218217 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:13:37.219343 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:13:37.222477 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:13:37.223723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:13:37.252376 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:13:37.256456 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:13:37.259537 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:13:37.262303 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:13:37.331393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:13:37.342530 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:13:37.344124 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:13:37.349410 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:13:37.362245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:13:37.369322 ignition[915]: INFO : Ignition 2.19.0
Jul 7 06:13:37.369322 ignition[915]: INFO : Stage: mount
Jul 7 06:13:37.371831 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:37.371831 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:13:37.371831 ignition[915]: INFO : mount: mount passed
Jul 7 06:13:37.371831 ignition[915]: INFO : Ignition finished successfully
Jul 7 06:13:37.374434 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:13:37.390532 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:13:37.821461 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:13:37.833566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:13:37.840091 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (928)
Jul 7 06:13:37.840136 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:13:37.840166 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:13:37.841698 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:13:37.844440 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:13:37.844865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:13:37.860336 ignition[947]: INFO : Ignition 2.19.0
Jul 7 06:13:37.860336 ignition[947]: INFO : Stage: files
Jul 7 06:13:37.861884 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:37.861884 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:13:37.861884 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:13:37.865305 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:13:37.865305 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:13:37.865305 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:13:37.865305 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:13:37.865305 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:13:37.864760 unknown[947]: wrote ssh authorized keys file for user: core
Jul 7 06:13:37.872575 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 06:13:37.872575 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 7 06:13:37.920040 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:13:38.143968 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 06:13:38.143968 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:13:38.147763 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 7 06:13:38.288502 systemd-networkd[767]: eth0: Gained IPv6LL
Jul 7 06:13:38.502438 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 06:13:38.556589 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:13:38.556589 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:13:38.560150 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 7 06:13:38.990264 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:13:39.255472 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:13:39.255472 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 06:13:39.258879 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:13:39.282563 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:13:39.286379 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:13:39.288002 ignition[947]: INFO : files: files passed
Jul 7 06:13:39.288002 ignition[947]: INFO : Ignition finished successfully
Jul 7 06:13:39.290433 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:13:39.303563 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:13:39.306705 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:13:39.309742 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:13:39.310478 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:13:39.314120 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:13:39.317914 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:39.317914 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:39.322466 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:39.321370 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:13:39.323058 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:13:39.340591 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:13:39.360490 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:13:39.360605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:13:39.362833 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:13:39.364590 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:13:39.366381 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:13:39.367219 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:13:39.382759 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:39.385278 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:13:39.397116 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:39.398389 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:39.400482 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:13:39.402176 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:13:39.402304 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:39.404749 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:13:39.406691 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:13:39.408282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:13:39.410007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:13:39.411936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:13:39.413841 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:13:39.415693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:13:39.417614 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:13:39.419560 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:13:39.421268 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:13:39.422766 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:13:39.422895 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:13:39.425151 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:13:39.427082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:13:39.429022 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:13:39.429142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:13:39.431156 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:13:39.431282 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:13:39.434074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:13:39.434205 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:13:39.436119 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:13:39.437695 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:13:39.442469 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:13:39.443756 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:13:39.445774 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:13:39.447304 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:13:39.447409 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:13:39.448923 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:13:39.449012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:13:39.450522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:13:39.450640 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:13:39.452415 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:13:39.452524 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:13:39.464584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:13:39.465489 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:13:39.465632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:13:39.468731 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:13:39.470202 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:13:39.470334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:13:39.472439 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:13:39.472589 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:13:39.478700 ignition[1002]: INFO : Ignition 2.19.0 Jul 7 06:13:39.478700 ignition[1002]: INFO : Stage: umount Jul 7 06:13:39.480434 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:39.480434 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:13:39.480161 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 7 06:13:39.484780 ignition[1002]: INFO : umount: umount passed Jul 7 06:13:39.484780 ignition[1002]: INFO : Ignition finished successfully Jul 7 06:13:39.481440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:13:39.483585 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:13:39.484015 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:13:39.484095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:13:39.486573 systemd[1]: Stopped target network.target - Network. Jul 7 06:13:39.488148 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:13:39.488212 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:13:39.490386 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:13:39.490538 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:13:39.492202 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:13:39.492246 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:13:39.494040 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:13:39.494087 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:13:39.495940 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:13:39.497562 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:13:39.503469 systemd-networkd[767]: eth0: DHCPv6 lease lost Jul 7 06:13:39.505417 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:13:39.505545 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:13:39.507816 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:13:39.507946 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:13:39.510566 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:13:39.510618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:13:39.519548 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:13:39.520428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:13:39.520493 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:13:39.522631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:13:39.522680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:13:39.524430 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:13:39.524479 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:13:39.526478 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:13:39.526523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:13:39.528466 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:13:39.538211 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:13:39.538351 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:13:39.546156 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:13:39.546326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:13:39.548596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 7 06:13:39.548636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:13:39.550528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:13:39.550561 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:13:39.552317 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:13:39.552366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:13:39.555065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:13:39.555110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:13:39.557698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:13:39.557743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:13:39.569602 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:13:39.570646 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:13:39.570707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:13:39.572775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:13:39.572821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:39.574946 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:13:39.576456 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:13:39.577670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:13:39.577762 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:13:39.580062 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:13:39.581116 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:13:39.581192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:13:39.583630 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:13:39.593680 systemd[1]: Switching root. Jul 7 06:13:39.630654 systemd-journald[239]: Journal stopped Jul 7 06:13:40.360372 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jul 7 06:13:40.360444 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:13:40.360461 kernel: SELinux: policy capability open_perms=1 Jul 7 06:13:40.360471 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:13:40.360485 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:13:40.360495 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:13:40.360505 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:13:40.360519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:13:40.360529 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:13:40.360538 kernel: audit: type=1403 audit(1751868819.791:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:13:40.360552 systemd[1]: Successfully loaded SELinux policy in 33.995ms. Jul 7 06:13:40.360568 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.680ms. 
Jul 7 06:13:40.360580 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:13:40.360593 systemd[1]: Detected virtualization kvm. Jul 7 06:13:40.360604 systemd[1]: Detected architecture arm64. Jul 7 06:13:40.360614 systemd[1]: Detected first boot. Jul 7 06:13:40.360624 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:13:40.360635 zram_generator::config[1046]: No configuration found. Jul 7 06:13:40.360646 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:13:40.360658 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:13:40.360669 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:13:40.360680 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:13:40.360691 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:13:40.360702 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:13:40.360713 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:13:40.360723 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:13:40.360734 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:13:40.360747 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:13:40.360758 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:13:40.360769 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:13:40.360780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:13:40.360790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:13:40.360801 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:13:40.360812 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:13:40.360823 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:13:40.360834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:13:40.360846 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 06:13:40.360857 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:13:40.360867 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:13:40.360877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:13:40.360888 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:13:40.360898 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:13:40.360909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:13:40.360920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:13:40.360932 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:13:40.360943 systemd[1]: Reached target swap.target - Swaps. 
Jul 7 06:13:40.360953 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:13:40.360964 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:13:40.360975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:13:40.360985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:13:40.360996 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:13:40.361007 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:13:40.361017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:13:40.361030 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:13:40.361040 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:13:40.361052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:13:40.361063 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:13:40.361073 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:13:40.361084 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:13:40.361095 systemd[1]: Reached target machines.target - Containers. Jul 7 06:13:40.361105 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:13:40.361116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:13:40.361128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:13:40.361146 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:13:40.361158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:13:40.361168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:13:40.361179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:13:40.361189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:13:40.361200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:13:40.361211 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:13:40.361224 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:13:40.361234 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:13:40.361249 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:13:40.361260 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:13:40.361270 kernel: fuse: init (API version 7.39) Jul 7 06:13:40.361280 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:13:40.361292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:13:40.361303 kernel: ACPI: bus type drm_connector registered Jul 7 06:13:40.361312 kernel: loop: module loaded Jul 7 06:13:40.361324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:13:40.361335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 7 06:13:40.361365 systemd-journald[1117]: Collecting audit messages is disabled. Jul 7 06:13:40.361388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:13:40.361409 systemd-journald[1117]: Journal started Jul 7 06:13:40.361431 systemd-journald[1117]: Runtime Journal (/run/log/journal/f244fa38f7344aa0b491e8dca4629d46) is 5.9M, max 47.3M, 41.4M free. Jul 7 06:13:40.147272 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:13:40.164550 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 06:13:40.164930 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:13:40.364003 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:13:40.364041 systemd[1]: Stopped verity-setup.service. Jul 7 06:13:40.367421 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:13:40.368011 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:13:40.369183 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:13:40.370386 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:13:40.371459 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:13:40.372656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:13:40.373871 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:13:40.376433 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:13:40.377823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:13:40.380748 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:13:40.380909 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:13:40.382371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:13:40.382561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:13:40.383881 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:13:40.384024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:13:40.385353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:13:40.385502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:13:40.387092 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:13:40.387242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:13:40.388604 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:13:40.388741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:13:40.390189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:13:40.391581 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:13:40.394477 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:13:40.406476 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:13:40.417504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:13:40.419726 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:13:40.420861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jul 7 06:13:40.420906 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:13:40.422987 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 06:13:40.425309 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:13:40.427593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:13:40.428749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:13:40.430235 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:13:40.432384 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:13:40.433619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:13:40.435553 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:13:40.436692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:13:40.438595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:13:40.441672 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:13:40.443919 systemd-journald[1117]: Time spent on flushing to /var/log/journal/f244fa38f7344aa0b491e8dca4629d46 is 27.814ms for 858 entries. Jul 7 06:13:40.443919 systemd-journald[1117]: System Journal (/var/log/journal/f244fa38f7344aa0b491e8dca4629d46) is 8.0M, max 195.6M, 187.6M free. Jul 7 06:13:40.479690 systemd-journald[1117]: Received client request to flush runtime journal. Jul 7 06:13:40.479740 kernel: loop0: detected capacity change from 0 to 114328 Jul 7 06:13:40.479755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:13:40.445504 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:13:40.451507 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:13:40.453099 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:13:40.455139 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:13:40.456943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:13:40.458622 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:13:40.463345 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:13:40.476679 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 06:13:40.480802 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 06:13:40.485836 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:13:40.495425 kernel: loop1: detected capacity change from 0 to 207008 Jul 7 06:13:40.493243 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:13:40.493848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:13:40.496101 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 06:13:40.501782 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 06:13:40.503198 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:13:40.515633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:13:40.528479 kernel: loop2: detected capacity change from 0 to 114432 Jul 7 06:13:40.530642 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 06:13:40.530658 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 06:13:40.537053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:13:40.567457 kernel: loop3: detected capacity change from 0 to 114328 Jul 7 06:13:40.573450 kernel: loop4: detected capacity change from 0 to 207008 Jul 7 06:13:40.579422 kernel: loop5: detected capacity change from 0 to 114432 Jul 7 06:13:40.583006 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 06:13:40.583469 (sd-merge)[1181]: Merged extensions into '/usr'. Jul 7 06:13:40.587211 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:13:40.587330 systemd[1]: Reloading... Jul 7 06:13:40.658243 zram_generator::config[1207]: No configuration found. Jul 7 06:13:40.732044 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:13:40.760468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:13:40.797768 systemd[1]: Reloading finished in 209 ms. Jul 7 06:13:40.833206 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:13:40.834730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:13:40.850921 systemd[1]: Starting ensure-sysext.service... Jul 7 06:13:40.853016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:13:40.861469 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:13:40.861484 systemd[1]: Reloading... Jul 7 06:13:40.870633 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:13:40.870902 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:13:40.871583 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:13:40.871803 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 7 06:13:40.871850 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 7 06:13:40.874119 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:13:40.874124 systemd-tmpfiles[1242]: Skipping /boot Jul 7 06:13:40.881571 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:13:40.881583 systemd-tmpfiles[1242]: Skipping /boot Jul 7 06:13:40.921444 zram_generator::config[1272]: No configuration found. Jul 7 06:13:41.005233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
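The (sd-merge) messages above are systemd-sysext activating the extension images that Ignition staged, overlaying 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' onto /usr before the daemon reload. As a rough sketch of the discovery step only (the search paths are taken from the systemd-sysext documentation and are an assumption here; the real implementation also validates each image's extension-release metadata before merging):

    # Rough sketch of sysext image discovery; not the systemd implementation.
    # Assumes the documented search paths; real sysext also checks each image's
    # extension-release file for ID/VERSION_ID compatibility before merging.
    from pathlib import Path

    SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def discover_extension_images():
        images = []
        for directory in map(Path, SEARCH_PATHS):
            if directory.is_dir():
                images.extend(sorted(directory.glob("*.raw")))
        return images

    for image in discover_extension_images():
        print(f"candidate extension image: {image}")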
Jul 7 06:13:41.042313 systemd[1]: Reloading finished in 180 ms. Jul 7 06:13:41.058461 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:13:41.075828 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:13:41.084337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:13:41.087301 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:13:41.090008 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:13:41.094724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:13:41.105234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:13:41.110722 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:13:41.114348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:13:41.116944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:13:41.121660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:13:41.126665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:13:41.128022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:13:41.133106 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:13:41.135036 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:13:41.137863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:13:41.137993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:13:41.141014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:13:41.141167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:13:41.142943 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:13:41.142993 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jul 7 06:13:41.143076 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:13:41.148211 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:13:41.151954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:13:41.163791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:13:41.168785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:13:41.171889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:13:41.174553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:13:41.176085 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:13:41.179551 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:13:41.180751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 7 06:13:41.182587 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:13:41.185032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:13:41.186416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:13:41.190091 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:13:41.190253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:13:41.201872 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:13:41.210417 augenrules[1364]: No rules Jul 7 06:13:41.203693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:13:41.203830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:13:41.207179 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:13:41.208989 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:13:41.217746 systemd[1]: Finished ensure-sysext.service. Jul 7 06:13:41.225233 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 06:13:41.226341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:13:41.233610 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1342) Jul 7 06:13:41.235032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:13:41.238274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:13:41.243547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:13:41.245041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:13:41.249119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:13:41.251668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:13:41.265640 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:13:41.267036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:13:41.267563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:13:41.267869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:13:41.269951 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:13:41.271450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:13:41.272789 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:13:41.272928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:13:41.282499 systemd-resolved[1310]: Positive Trust Anchors: Jul 7 06:13:41.282514 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:13:41.282547 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:13:41.292839 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jul 7 06:13:41.293366 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:13:41.295018 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:13:41.296275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:13:41.298806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:13:41.300110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:13:41.320462 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:13:41.345232 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:13:41.349531 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:13:41.352845 systemd-networkd[1380]: lo: Link UP Jul 7 06:13:41.352855 systemd-networkd[1380]: lo: Gained carrier Jul 7 06:13:41.353591 systemd-networkd[1380]: Enumeration completed Jul 7 06:13:41.353698 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:13:41.354049 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:41.354059 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:13:41.354738 systemd-networkd[1380]: eth0: Link UP Jul 7 06:13:41.354746 systemd-networkd[1380]: eth0: Gained carrier Jul 7 06:13:41.354760 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:41.358215 systemd[1]: Reached target network.target - Network. Jul 7 06:13:41.369708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:13:41.372504 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:13:41.373016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:41.374983 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 7 06:13:41.376544 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:13:41.376601 systemd-timesyncd[1382]: Initial clock synchronization to Mon 2025-07-07 06:13:41.339778 UTC. Jul 7 06:13:41.384432 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 06:13:41.387239 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 06:13:41.404694 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:13:41.418547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:41.441824 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 06:13:41.444772 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:13:41.445891 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:13:41.447102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:13:41.448364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:13:41.449784 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:13:41.450978 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:13:41.452230 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:13:41.453486 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:13:41.453530 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:13:41.454416 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:13:41.456061 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:13:41.458679 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:13:41.471416 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:13:41.473690 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 06:13:41.475262 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:13:41.476552 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:13:41.477527 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:13:41.478454 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:13:41.478487 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:13:41.479440 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:13:41.481442 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:13:41.483022 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:13:41.485557 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:13:41.488786 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:13:41.490195 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:13:41.494938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:13:41.496870 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:13:41.498718 jq[1411]: false Jul 7 06:13:41.501584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:13:41.503854 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
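The positive trust anchor that systemd-resolved installs above is the DNSSEC DS record for the root zone (the KSK-2017 anchor). For reference, its fields decode per RFC 4034: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A small illustrative parse:

    # Decoding the positive trust anchor from the log above (RFC 4034 fields).
    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, rtype, key_tag, algorithm, digest_type, digest = anchor.split()
    # algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256
    assert (rtype, algorithm, digest_type) == ("DS", "8", "2")
    print(f"zone={owner!r} key_tag={key_tag} digest={digest[:16]}...")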
Jul 7 06:13:41.506952 extend-filesystems[1412]: Found loop3 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found loop4 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found loop5 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda1 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda2 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda3 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found usr Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda4 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda6 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda7 Jul 7 06:13:41.507912 extend-filesystems[1412]: Found vda9 Jul 7 06:13:41.507912 extend-filesystems[1412]: Checking size of /dev/vda9 Jul 7 06:13:41.516004 dbus-daemon[1410]: [system] SELinux support is enabled Jul 7 06:13:41.507914 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:13:41.534046 extend-filesystems[1412]: Resized partition /dev/vda9 Jul 7 06:13:41.509664 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:13:41.510041 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:13:41.511229 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:13:41.516501 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:13:41.519274 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:13:41.522602 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 06:13:41.535218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:13:41.536043 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) Jul 7 06:13:41.536516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:13:41.536925 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:13:41.537073 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:13:41.540835 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:13:41.539248 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:13:41.539390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:13:41.542304 jq[1422]: true Jul 7 06:13:41.550451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1342) Jul 7 06:13:41.562425 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:13:41.568961 update_engine[1421]: I20250707 06:13:41.568753 1421 main.cc:92] Flatcar Update Engine starting Jul 7 06:13:41.572619 tar[1435]: linux-arm64/LICENSE Jul 7 06:13:41.588088 tar[1435]: linux-arm64/helm Jul 7 06:13:41.588144 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:13:41.588144 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:13:41.588144 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
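The extend-filesystems unit above grows the root filesystem online with resize2fs. Since the filesystem uses 4 KiB blocks, the reported change from 553472 to 1864699 blocks means the root filesystem grew from roughly 2.1 GiB to roughly 7.1 GiB, as this quick arithmetic check shows:

    # Quick check of the online resize reported above: ext4 here counts 4 KiB
    # blocks, so /dev/vda9 grew from about 2.1 GiB to about 7.1 GiB.
    BLOCK_SIZE = 4096
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")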
Jul 7 06:13:41.593452 jq[1438]: true Jul 7 06:13:41.593531 update_engine[1421]: I20250707 06:13:41.573244 1421 update_check_scheduler.cc:74] Next update check in 11m0s Jul 7 06:13:41.593692 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jul 7 06:13:41.591888 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:13:41.592555 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 06:13:41.592720 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:13:41.595478 systemd-logind[1418]: New seat seat0. Jul 7 06:13:41.602107 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:13:41.602378 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:13:41.607020 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:13:41.609734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:13:41.609891 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:13:41.611879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:13:41.612008 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:13:41.620738 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:13:41.672100 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:13:41.674949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:13:41.677590 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:13:41.681064 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:13:41.814069 containerd[1448]: time="2025-07-07T06:13:41.813665080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:13:41.841364 containerd[1448]: time="2025-07-07T06:13:41.841325280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.842626 containerd[1448]: time="2025-07-07T06:13:41.842593840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:13:41.842626 containerd[1448]: time="2025-07-07T06:13:41.842624680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:13:41.842713 containerd[1448]: time="2025-07-07T06:13:41.842640840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:13:41.842794 containerd[1448]: time="2025-07-07T06:13:41.842770320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 06:13:41.842832 containerd[1448]: time="2025-07-07T06:13:41.842793920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 7 06:13:41.842865 containerd[1448]: time="2025-07-07T06:13:41.842846520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:13:41.842887 containerd[1448]: time="2025-07-07T06:13:41.842862680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843029 containerd[1448]: time="2025-07-07T06:13:41.843007720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843058 containerd[1448]: time="2025-07-07T06:13:41.843028960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843058 containerd[1448]: time="2025-07-07T06:13:41.843042400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843058 containerd[1448]: time="2025-07-07T06:13:41.843051520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843179 containerd[1448]: time="2025-07-07T06:13:41.843134520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843347 containerd[1448]: time="2025-07-07T06:13:41.843326240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843466 containerd[1448]: time="2025-07-07T06:13:41.843447560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:13:41.843493 containerd[1448]: time="2025-07-07T06:13:41.843467080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:13:41.843565 containerd[1448]: time="2025-07-07T06:13:41.843548720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:13:41.843607 containerd[1448]: time="2025-07-07T06:13:41.843593800Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:13:41.846763 containerd[1448]: time="2025-07-07T06:13:41.846734440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:13:41.846894 containerd[1448]: time="2025-07-07T06:13:41.846781400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:13:41.846894 containerd[1448]: time="2025-07-07T06:13:41.846801800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:13:41.846894 containerd[1448]: time="2025-07-07T06:13:41.846816080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 06:13:41.846894 containerd[1448]: time="2025-07-07T06:13:41.846828720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 7 06:13:41.846989 containerd[1448]: time="2025-07-07T06:13:41.846947040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:13:41.847241 containerd[1448]: time="2025-07-07T06:13:41.847222640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:13:41.847348 containerd[1448]: time="2025-07-07T06:13:41.847323000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:13:41.847348 containerd[1448]: time="2025-07-07T06:13:41.847345560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:13:41.847395 containerd[1448]: time="2025-07-07T06:13:41.847358360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:13:41.847395 containerd[1448]: time="2025-07-07T06:13:41.847371360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847395 containerd[1448]: time="2025-07-07T06:13:41.847386480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847468 containerd[1448]: time="2025-07-07T06:13:41.847424440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847468 containerd[1448]: time="2025-07-07T06:13:41.847442240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847468 containerd[1448]: time="2025-07-07T06:13:41.847456600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847529 containerd[1448]: time="2025-07-07T06:13:41.847468680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847529 containerd[1448]: time="2025-07-07T06:13:41.847480680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847529 containerd[1448]: time="2025-07-07T06:13:41.847491760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:13:41.847529 containerd[1448]: time="2025-07-07T06:13:41.847511320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847529 containerd[1448]: time="2025-07-07T06:13:41.847524280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847536960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847548880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847559880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847572520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847584720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847596520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847610280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847622 containerd[1448]: time="2025-07-07T06:13:41.847624520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847635880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847651920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847663960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847683800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847702720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847714360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.847767 containerd[1448]: time="2025-07-07T06:13:41.847724800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:13:41.848375 containerd[1448]: time="2025-07-07T06:13:41.848314160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:13:41.848375 containerd[1448]: time="2025-07-07T06:13:41.848344120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:13:41.848375 containerd[1448]: time="2025-07-07T06:13:41.848355720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:13:41.848375 containerd[1448]: time="2025-07-07T06:13:41.848368640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:13:41.848375 containerd[1448]: time="2025-07-07T06:13:41.848377720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:13:41.848678 containerd[1448]: time="2025-07-07T06:13:41.848391080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:13:41.848678 containerd[1448]: time="2025-07-07T06:13:41.848440840Z" level=info msg="NRI interface is disabled by configuration." Jul 7 06:13:41.848678 containerd[1448]: time="2025-07-07T06:13:41.848455880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 06:13:41.848922 containerd[1448]: time="2025-07-07T06:13:41.848716840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:13:41.848922 containerd[1448]: time="2025-07-07T06:13:41.848770960Z" level=info msg="Connect containerd service" Jul 7 06:13:41.849826 containerd[1448]: time="2025-07-07T06:13:41.849344320Z" level=info msg="using legacy CRI server" Jul 7 06:13:41.849826 containerd[1448]: time="2025-07-07T06:13:41.849379680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:13:41.849826 containerd[1448]: time="2025-07-07T06:13:41.849502880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:13:41.851105 containerd[1448]: time="2025-07-07T06:13:41.851071320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:13:41.851722 
containerd[1448]: time="2025-07-07T06:13:41.851687600Z" level=info msg="Start subscribing containerd event" Jul 7 06:13:41.852344 containerd[1448]: time="2025-07-07T06:13:41.852211960Z" level=info msg="Start recovering state" Jul 7 06:13:41.852604 containerd[1448]: time="2025-07-07T06:13:41.852466080Z" level=info msg="Start event monitor" Jul 7 06:13:41.852897 containerd[1448]: time="2025-07-07T06:13:41.852742360Z" level=info msg="Start snapshots syncer" Jul 7 06:13:41.852897 containerd[1448]: time="2025-07-07T06:13:41.852766280Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:13:41.852897 containerd[1448]: time="2025-07-07T06:13:41.852775160Z" level=info msg="Start streaming server" Jul 7 06:13:41.853457 containerd[1448]: time="2025-07-07T06:13:41.853435200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:13:41.855409 containerd[1448]: time="2025-07-07T06:13:41.853640640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:13:41.855409 containerd[1448]: time="2025-07-07T06:13:41.854442040Z" level=info msg="containerd successfully booted in 0.042209s" Jul 7 06:13:41.853788 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:13:41.961193 tar[1435]: linux-arm64/README.md Jul 7 06:13:41.972850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:13:42.640548 systemd-networkd[1380]: eth0: Gained IPv6LL Jul 7 06:13:42.643268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:13:42.645039 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:13:42.653930 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:13:42.656459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:13:42.658699 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:13:42.679439 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:13:42.680904 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:13:42.681110 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:13:42.684132 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:13:42.894119 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:13:42.912894 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:13:42.921641 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:13:42.927537 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:13:42.927749 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:13:42.931090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:13:42.942092 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:13:42.945037 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:13:42.947191 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:13:42.948630 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:13:43.213825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:13:43.215524 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:13:43.216689 systemd[1]: Startup finished in 639ms (kernel) + 5.083s (initrd) + 3.461s (userspace) = 9.184s. 
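The CRI config dump a few entries up (Snapshotter:overlayfs, DefaultRuntimeName:runc, Options:map[SystemdCgroup:true], SandboxImage:registry.k8s.io/pause:3.8) is what containerd parsed out of its config.toml. A minimal sketch of the stanzas that produce those values, for orientation only and not this image's exact file:

    # /etc/containerd/config.toml -- illustrative sketch
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true   # cgroup management delegated to systemd, matching the dump

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error above is expected at this point: conf_dir is empty until a network plugin is installed (see the conflist sketch near the end of this log).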
Jul 7 06:13:43.217439 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:13:43.633441 kubelet[1522]: E0707 06:13:43.633288 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:13:43.635822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:13:43.635973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:13:47.900018 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:13:47.901134 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:60872.service - OpenSSH per-connection server daemon (10.0.0.1:60872). Jul 7 06:13:47.987372 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 60872 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:47.990993 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.001849 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:13:48.018608 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:13:48.020382 systemd-logind[1418]: New session 1 of user core. Jul 7 06:13:48.028712 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:13:48.032659 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:13:48.036174 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:13:48.106735 systemd[1540]: Queued start job for default target default.target. Jul 7 06:13:48.121237 systemd[1540]: Created slice app.slice - User Application Slice. Jul 7 06:13:48.121267 systemd[1540]: Reached target paths.target - Paths. Jul 7 06:13:48.121278 systemd[1540]: Reached target timers.target - Timers. Jul 7 06:13:48.122371 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:13:48.130868 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:13:48.130927 systemd[1540]: Reached target sockets.target - Sockets. Jul 7 06:13:48.130939 systemd[1540]: Reached target basic.target - Basic System. Jul 7 06:13:48.130973 systemd[1540]: Reached target default.target - Main User Target. Jul 7 06:13:48.130995 systemd[1540]: Startup finished in 90ms. Jul 7 06:13:48.131222 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:13:48.132458 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:13:48.197352 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:60878.service - OpenSSH per-connection server daemon (10.0.0.1:60878). Jul 7 06:13:48.229161 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 60878 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.230456 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.235469 systemd-logind[1418]: New session 2 of user core. Jul 7 06:13:48.241627 systemd[1]: Started session-2.scope - Session 2 of User core. 
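The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal first-boot state: that file is written later, typically by kubeadm init/join, and systemd keeps restarting the unit until it appears. For orientation, what eventually lands at that path is a KubeletConfiguration of roughly this shape — a hedged sketch, not this node's actual config:

    # /var/lib/kubelet/config.yaml -- illustrative sketch (normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # must agree with containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests   # where the control-plane static pods come from
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt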
Jul 7 06:13:48.293334 sshd[1551]: pam_unix(sshd:session): session closed for user core Jul 7 06:13:48.305651 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:60878.service: Deactivated successfully. Jul 7 06:13:48.307088 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:13:48.309816 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:13:48.310627 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884). Jul 7 06:13:48.311294 systemd-logind[1418]: Removed session 2. Jul 7 06:13:48.340669 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.341784 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.345303 systemd-logind[1418]: New session 3 of user core. Jul 7 06:13:48.352517 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:13:48.399691 sshd[1558]: pam_unix(sshd:session): session closed for user core Jul 7 06:13:48.414541 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:60884.service: Deactivated successfully. Jul 7 06:13:48.415836 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:13:48.418458 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:13:48.419472 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:60896.service - OpenSSH per-connection server daemon (10.0.0.1:60896). Jul 7 06:13:48.420074 systemd-logind[1418]: Removed session 3. Jul 7 06:13:48.449334 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 60896 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.450262 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.453267 systemd-logind[1418]: New session 4 of user core. Jul 7 06:13:48.464611 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:13:48.514727 sshd[1565]: pam_unix(sshd:session): session closed for user core Jul 7 06:13:48.529599 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:60896.service: Deactivated successfully. Jul 7 06:13:48.530962 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:13:48.533501 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:13:48.534762 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:60900.service - OpenSSH per-connection server daemon (10.0.0.1:60900). Jul 7 06:13:48.535780 systemd-logind[1418]: Removed session 4. Jul 7 06:13:48.564601 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 60900 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.565646 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.569217 systemd-logind[1418]: New session 5 of user core. Jul 7 06:13:48.575533 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:13:48.634150 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:13:48.634437 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:13:48.650240 sudo[1575]: pam_unix(sudo:session): session closed for user root Jul 7 06:13:48.651716 sshd[1572]: pam_unix(sshd:session): session closed for user core Jul 7 06:13:48.667691 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:60900.service: Deactivated successfully. Jul 7 06:13:48.669060 systemd[1]: session-5.scope: Deactivated successfully. 
Jul 7 06:13:48.671495 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:13:48.672593 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:60914.service - OpenSSH per-connection server daemon (10.0.0.1:60914). Jul 7 06:13:48.673282 systemd-logind[1418]: Removed session 5. Jul 7 06:13:48.703162 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 60914 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.704192 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.707457 systemd-logind[1418]: New session 6 of user core. Jul 7 06:13:48.718583 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:13:48.767853 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:13:48.768366 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:13:48.770986 sudo[1584]: pam_unix(sudo:session): session closed for user root Jul 7 06:13:48.775141 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:13:48.775395 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:13:48.791614 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:13:48.792580 auditctl[1587]: No rules Jul 7 06:13:48.793294 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:13:48.795477 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:13:48.796951 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:13:48.818322 augenrules[1605]: No rules Jul 7 06:13:48.819674 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:13:48.820536 sudo[1583]: pam_unix(sudo:session): session closed for user root Jul 7 06:13:48.821820 sshd[1580]: pam_unix(sshd:session): session closed for user core Jul 7 06:13:48.832622 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:60914.service: Deactivated successfully. Jul 7 06:13:48.833938 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:13:48.835122 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:13:48.836127 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:60918.service - OpenSSH per-connection server daemon (10.0.0.1:60918). Jul 7 06:13:48.837092 systemd-logind[1418]: Removed session 6. Jul 7 06:13:48.866943 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 60918 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:13:48.868024 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:48.871207 systemd-logind[1418]: New session 7 of user core. Jul 7 06:13:48.880519 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:13:48.929350 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:13:48.929633 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:13:49.234609 systemd[1]: Starting docker.service - Docker Application Container Engine... 
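The sudo session above wipes /etc/audit/rules.d/ and restarts audit-rules, which is why both auditctl and augenrules report "No rules": augenrules simply concatenates every *.rules drop-in in that directory into /etc/audit/audit.rules, and the directory is now empty. A hypothetical drop-in, purely to show the syntax of what was deleted:

    # /etc/audit/rules.d/99-default.rules -- hypothetical example
    -D                                       # flush any rules loaded so far
    -b 8192                                  # kernel audit backlog buffer size
    -w /etc/passwd -p wa -k passwd_changes   # watch writes/attr changes, keyed for search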
Jul 7 06:13:49.234811 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:13:49.489780 dockerd[1634]: time="2025-07-07T06:13:49.489379812Z" level=info msg="Starting up" Jul 7 06:13:49.573205 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3597580853-merged.mount: Deactivated successfully. Jul 7 06:13:49.592476 dockerd[1634]: time="2025-07-07T06:13:49.592430199Z" level=info msg="Loading containers: start." Jul 7 06:13:49.671422 kernel: Initializing XFRM netlink socket Jul 7 06:13:49.732541 systemd-networkd[1380]: docker0: Link UP Jul 7 06:13:49.754690 dockerd[1634]: time="2025-07-07T06:13:49.754587705Z" level=info msg="Loading containers: done." Jul 7 06:13:49.768474 dockerd[1634]: time="2025-07-07T06:13:49.768428614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:13:49.768589 dockerd[1634]: time="2025-07-07T06:13:49.768524566Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:13:49.768649 dockerd[1634]: time="2025-07-07T06:13:49.768621956Z" level=info msg="Daemon has completed initialization" Jul 7 06:13:49.796933 dockerd[1634]: time="2025-07-07T06:13:49.796797800Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:13:49.797151 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:13:50.461789 containerd[1448]: time="2025-07-07T06:13:50.461743070Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:13:50.570594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2094730670-merged.mount: Deactivated successfully. Jul 7 06:13:51.017247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196908988.mount: Deactivated successfully. 
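dockerd settles on the overlay2 storage driver by probing the kernel (the var-lib-docker-check-overlayfs-support and overlay2-opaque-bug-check mounts around this point are those probes). Pinning the choice explicitly instead of re-probing would be a one-key daemon.json; a sketch, assuming the default /etc/docker/daemon.json path:

    {
      "storage-driver": "overlay2"
    }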
Jul 7 06:13:51.977189 containerd[1448]: time="2025-07-07T06:13:51.976995596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:51.978501 containerd[1448]: time="2025-07-07T06:13:51.978469168Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 7 06:13:51.979537 containerd[1448]: time="2025-07-07T06:13:51.979489930Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:51.982471 containerd[1448]: time="2025-07-07T06:13:51.982386837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:51.983518 containerd[1448]: time="2025-07-07T06:13:51.983485811Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.521700658s" Jul 7 06:13:51.983806 containerd[1448]: time="2025-07-07T06:13:51.983612822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 06:13:51.984806 containerd[1448]: time="2025-07-07T06:13:51.984654965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:13:53.074271 containerd[1448]: time="2025-07-07T06:13:53.074215538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:53.075052 containerd[1448]: time="2025-07-07T06:13:53.074580923Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 7 06:13:53.075875 containerd[1448]: time="2025-07-07T06:13:53.075841425Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:53.081271 containerd[1448]: time="2025-07-07T06:13:53.081227636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:53.082360 containerd[1448]: time="2025-07-07T06:13:53.082328227Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.097642568s" Jul 7 06:13:53.082537 containerd[1448]: time="2025-07-07T06:13:53.082443934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 06:13:53.082937 containerd[1448]: 
time="2025-07-07T06:13:53.082906960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:13:53.887683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:13:53.902701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:13:54.002756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:13:54.006703 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:13:54.175767 containerd[1448]: time="2025-07-07T06:13:54.175637104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:54.176243 containerd[1448]: time="2025-07-07T06:13:54.176207417Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 7 06:13:54.177234 containerd[1448]: time="2025-07-07T06:13:54.177183174Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:54.180483 containerd[1448]: time="2025-07-07T06:13:54.180448780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:54.181797 containerd[1448]: time="2025-07-07T06:13:54.181651079Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.098709227s" Jul 7 06:13:54.181797 containerd[1448]: time="2025-07-07T06:13:54.181687131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 06:13:54.182301 containerd[1448]: time="2025-07-07T06:13:54.182182304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:13:54.193111 kubelet[1852]: E0707 06:13:54.193073 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:13:54.195851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:13:54.196000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:13:55.111376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510995284.mount: Deactivated successfully. 
Jul 7 06:13:55.446715 containerd[1448]: time="2025-07-07T06:13:55.446436649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:55.447112 containerd[1448]: time="2025-07-07T06:13:55.447086876Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 7 06:13:55.447962 containerd[1448]: time="2025-07-07T06:13:55.447921523Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:55.450247 containerd[1448]: time="2025-07-07T06:13:55.450216544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:55.451293 containerd[1448]: time="2025-07-07T06:13:55.451256396Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.268863618s" Jul 7 06:13:55.451327 containerd[1448]: time="2025-07-07T06:13:55.451294887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 06:13:55.451980 containerd[1448]: time="2025-07-07T06:13:55.451952749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:13:56.044211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274467949.mount: Deactivated successfully. 
Jul 7 06:13:56.737923 containerd[1448]: time="2025-07-07T06:13:56.737746854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:56.738817 containerd[1448]: time="2025-07-07T06:13:56.738789729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 06:13:56.740057 containerd[1448]: time="2025-07-07T06:13:56.739633389Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:56.743101 containerd[1448]: time="2025-07-07T06:13:56.743072065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:56.744909 containerd[1448]: time="2025-07-07T06:13:56.744880178Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.292896612s" Jul 7 06:13:56.745009 containerd[1448]: time="2025-07-07T06:13:56.744993654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:13:56.745478 containerd[1448]: time="2025-07-07T06:13:56.745455515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:13:57.171918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511319934.mount: Deactivated successfully. 
Jul 7 06:13:57.177361 containerd[1448]: time="2025-07-07T06:13:57.177036460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:57.178112 containerd[1448]: time="2025-07-07T06:13:57.178080438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:13:57.179145 containerd[1448]: time="2025-07-07T06:13:57.179087881Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:57.181162 containerd[1448]: time="2025-07-07T06:13:57.181126032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:57.182297 containerd[1448]: time="2025-07-07T06:13:57.182242398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 436.755945ms" Jul 7 06:13:57.182297 containerd[1448]: time="2025-07-07T06:13:57.182272736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:13:57.182758 containerd[1448]: time="2025-07-07T06:13:57.182741963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:13:57.691169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168400040.mount: Deactivated successfully. Jul 7 06:13:59.353273 containerd[1448]: time="2025-07-07T06:13:59.353214425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:59.354383 containerd[1448]: time="2025-07-07T06:13:59.354080807Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 7 06:13:59.367925 containerd[1448]: time="2025-07-07T06:13:59.367877719Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:59.399200 containerd[1448]: time="2025-07-07T06:13:59.399144333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:59.400722 containerd[1448]: time="2025-07-07T06:13:59.400675831Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.217855483s" Jul 7 06:13:59.400722 containerd[1448]: time="2025-07-07T06:13:59.400715285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 06:14:04.446940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
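"Scheduled restart job, restart counter is at 2" confirms it is systemd, not the kubelet itself, doing the retrying after each config-file failure. That behavior comes from the unit's restart policy, conventionally an excerpt like the following — a sketch of the usual kubelet.service settings, not a dump of this image's unit:

    [Service]
    Restart=always
    RestartSec=10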
Jul 7 06:14:04.457599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:04.525567 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:14:04.525769 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:14:04.525984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:04.537668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:04.558871 systemd[1]: Reloading requested from client PID 2014 ('systemctl') (unit session-7.scope)... Jul 7 06:14:04.558889 systemd[1]: Reloading... Jul 7 06:14:04.629445 zram_generator::config[2053]: No configuration found. Jul 7 06:14:04.837217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:14:04.892914 systemd[1]: Reloading finished in 333 ms. Jul 7 06:14:04.944606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:04.946940 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:14:04.947131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:04.948591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:05.051711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:05.056171 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:14:05.095368 kubelet[2100]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:05.095368 kubelet[2100]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:14:05.095368 kubelet[2100]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
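The deprecated flags warned about here (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) are not typed by an operator; they are expanded from the KUBELET_* environment variables that earlier starts reported as unset, via the conventional kubeadm systemd drop-in. Roughly this shape — a hedged sketch, with paths varying by distribution:

    # kubelet.service.d/10-kubeadm.conf -- illustrative sketch
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS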
Jul 7 06:14:05.095368 kubelet[2100]: I0707 06:14:05.094015 2100 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:14:05.677582 kubelet[2100]: I0707 06:14:05.677542 2100 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:14:05.677582 kubelet[2100]: I0707 06:14:05.677572 2100 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:14:05.677868 kubelet[2100]: I0707 06:14:05.677839 2100 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:14:05.708569 kubelet[2100]: E0707 06:14:05.708533 2100 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:05.710027 kubelet[2100]: I0707 06:14:05.709932 2100 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:14:05.714338 kubelet[2100]: E0707 06:14:05.714279 2100 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:14:05.714338 kubelet[2100]: I0707 06:14:05.714312 2100 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:14:05.717303 kubelet[2100]: I0707 06:14:05.717239 2100 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:14:05.717893 kubelet[2100]: I0707 06:14:05.717847 2100 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:14:05.718050 kubelet[2100]: I0707 06:14:05.717888 2100 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:05.718136 kubelet[2100]: I0707 06:14:05.718125 2100 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:14:05.718161 kubelet[2100]: I0707 06:14:05.718140 2100 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:14:05.718331 kubelet[2100]: I0707 06:14:05.718319 2100 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:05.722609 kubelet[2100]: I0707 06:14:05.722591 2100 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:14:05.722647 kubelet[2100]: I0707 06:14:05.722614 2100 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:05.722647 kubelet[2100]: I0707 06:14:05.722631 2100 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:14:05.722647 kubelet[2100]: I0707 06:14:05.722641 2100 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:05.724424 kubelet[2100]: W0707 06:14:05.724313 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 7 06:14:05.724424 kubelet[2100]: E0707 06:14:05.724365 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:05.725707 kubelet[2100]: W0707 06:14:05.725542 2100 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 7 06:14:05.725707 kubelet[2100]: E0707 06:14:05.725606 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:05.725707 kubelet[2100]: I0707 06:14:05.725627 2100 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:14:05.726304 kubelet[2100]: I0707 06:14:05.726273 2100 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:14:05.727286 kubelet[2100]: W0707 06:14:05.726394 2100 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:14:05.727286 kubelet[2100]: I0707 06:14:05.727217 2100 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:14:05.727286 kubelet[2100]: I0707 06:14:05.727243 2100 server.go:1287] "Started kubelet" Jul 7 06:14:05.730123 kubelet[2100]: I0707 06:14:05.730096 2100 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:14:05.730603 kubelet[2100]: I0707 06:14:05.730570 2100 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:14:05.733996 kubelet[2100]: I0707 06:14:05.732813 2100 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:14:05.733996 kubelet[2100]: E0707 06:14:05.733051 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:05.733996 kubelet[2100]: I0707 06:14:05.733354 2100 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:14:05.733996 kubelet[2100]: I0707 06:14:05.733438 2100 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:05.734676 kubelet[2100]: I0707 06:14:05.734624 2100 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:05.735138 kubelet[2100]: W0707 06:14:05.735094 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 7 06:14:05.735246 kubelet[2100]: E0707 06:14:05.735229 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:05.735483 kubelet[2100]: I0707 06:14:05.735460 2100 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:14:05.736560 kubelet[2100]: I0707 06:14:05.736121 2100 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:05.736560 kubelet[2100]: I0707 06:14:05.736355 2100 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:05.736560 kubelet[2100]: E0707 06:14:05.736483 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Jul 7 06:14:05.736560 kubelet[2100]: I0707 06:14:05.736358 2100 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:14:05.736560 kubelet[2100]: I0707 06:14:05.736558 2100 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:05.736560 kubelet[2100]: E0707 06:14:05.736151 2100 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe36ef2a51cc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:14:05.727227072 +0000 UTC m=+0.668006044,LastTimestamp:2025-07-07 06:14:05.727227072 +0000 UTC m=+0.668006044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:14:05.738080 kubelet[2100]: I0707 06:14:05.738042 2100 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:14:05.746745 kubelet[2100]: I0707 06:14:05.746718 2100 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:14:05.746745 kubelet[2100]: I0707 06:14:05.746739 2100 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:14:05.746861 kubelet[2100]: I0707 06:14:05.746757 2100 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:05.747915 kubelet[2100]: I0707 06:14:05.747787 2100 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:05.749093 kubelet[2100]: I0707 06:14:05.748825 2100 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:14:05.749093 kubelet[2100]: I0707 06:14:05.748846 2100 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:14:05.749093 kubelet[2100]: I0707 06:14:05.748866 2100 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:14:05.749093 kubelet[2100]: I0707 06:14:05.748872 2100 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:14:05.749093 kubelet[2100]: E0707 06:14:05.748910 2100 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:14:05.815619 kubelet[2100]: I0707 06:14:05.815576 2100 policy_none.go:49] "None policy: Start" Jul 7 06:14:05.815619 kubelet[2100]: I0707 06:14:05.815614 2100 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:14:05.815619 kubelet[2100]: I0707 06:14:05.815628 2100 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:14:05.815790 kubelet[2100]: W0707 06:14:05.815702 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 7 06:14:05.815790 kubelet[2100]: E0707 06:14:05.815759 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:05.821528 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:14:05.833269 kubelet[2100]: E0707 06:14:05.833230 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:05.835651 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:14:05.838389 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:14:05.849427 kubelet[2100]: I0707 06:14:05.849292 2100 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:14:05.849427 kubelet[2100]: E0707 06:14:05.849486 2100 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:14:05.849427 kubelet[2100]: I0707 06:14:05.849515 2100 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:14:05.849427 kubelet[2100]: I0707 06:14:05.849526 2100 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:14:05.849427 kubelet[2100]: I0707 06:14:05.849716 2100 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:14:05.850579 kubelet[2100]: E0707 06:14:05.850547 2100 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:14:05.850641 kubelet[2100]: E0707 06:14:05.850600 2100 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:14:05.937704 kubelet[2100]: E0707 06:14:05.937593 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Jul 7 06:14:05.951729 kubelet[2100]: I0707 06:14:05.951688 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:05.952122 kubelet[2100]: E0707 06:14:05.952097 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jul 7 06:14:06.057695 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 7 06:14:06.079560 kubelet[2100]: E0707 06:14:06.079525 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:06.082260 systemd[1]: Created slice kubepods-burstable-pod12e71b71a1c2fa3eda30c37920b8ec58.slice - libcontainer container kubepods-burstable-pod12e71b71a1c2fa3eda30c37920b8ec58.slice. Jul 7 06:14:06.083628 kubelet[2100]: E0707 06:14:06.083513 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:06.094800 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 7 06:14:06.096490 kubelet[2100]: E0707 06:14:06.096278 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:06.134933 kubelet[2100]: I0707 06:14:06.134889 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:06.135011 kubelet[2100]: I0707 06:14:06.134942 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:06.135011 kubelet[2100]: I0707 06:14:06.134992 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:06.135086 kubelet[2100]: I0707 06:14:06.135013 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:06.135086 kubelet[2100]: I0707 06:14:06.135031 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:06.135086 kubelet[2100]: I0707 06:14:06.135045 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:06.135086 kubelet[2100]: I0707 06:14:06.135058 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:06.135086 kubelet[2100]: I0707 06:14:06.135073 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:06.135190 kubelet[2100]: I0707 06:14:06.135095 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:06.154011 kubelet[2100]: I0707 06:14:06.153980 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:14:06.154356 kubelet[2100]: E0707 06:14:06.154317 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
Jul 7 06:14:06.339114 kubelet[2100]: E0707 06:14:06.338973 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms"
Jul 7 06:14:06.381421 kubelet[2100]: E0707 06:14:06.381382 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:06.382066 containerd[1448]: time="2025-07-07T06:14:06.382017411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:06.384269 kubelet[2100]: E0707 06:14:06.384232 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:06.384625 containerd[1448]: time="2025-07-07T06:14:06.384597113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12e71b71a1c2fa3eda30c37920b8ec58,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:06.396971 kubelet[2100]: E0707 06:14:06.396927 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:06.397313 containerd[1448]: time="2025-07-07T06:14:06.397222607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:06.556078 kubelet[2100]: I0707 06:14:06.556034 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:14:06.556371 kubelet[2100]: E0707 06:14:06.556345 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
Jul 7 06:14:06.696321 kubelet[2100]: W0707 06:14:06.696233 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 7 06:14:06.696321 kubelet[2100]: E0707 06:14:06.696269 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:14:06.820382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878509668.mount: Deactivated successfully.
Jul 7 06:14:06.826899 containerd[1448]: time="2025-07-07T06:14:06.826846978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:06.827875 containerd[1448]: time="2025-07-07T06:14:06.827848563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:06.828822 containerd[1448]: time="2025-07-07T06:14:06.828793458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:06.829138 containerd[1448]: time="2025-07-07T06:14:06.829050161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 7 06:14:06.829648 containerd[1448]: time="2025-07-07T06:14:06.829585795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 7 06:14:06.830124 containerd[1448]: time="2025-07-07T06:14:06.830095322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 7 06:14:06.830792 containerd[1448]: time="2025-07-07T06:14:06.830763845Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:06.836423 containerd[1448]: time="2025-07-07T06:14:06.836046662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:14:06.837452 containerd[1448]: time="2025-07-07T06:14:06.837420568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 440.114526ms"
Jul 7 06:14:06.838120 containerd[1448]: time="2025-07-07T06:14:06.837994102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 453.338461ms"
Jul 7 06:14:06.840276 containerd[1448]: time="2025-07-07T06:14:06.840248298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.15009ms"
Jul 7 06:14:06.998080 containerd[1448]: time="2025-07-07T06:14:06.997802956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:06.998080 containerd[1448]: time="2025-07-07T06:14:06.997854968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:06.998621 containerd[1448]: time="2025-07-07T06:14:06.997937084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:06.998621 containerd[1448]: time="2025-07-07T06:14:06.998448491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:06.999442 containerd[1448]: time="2025-07-07T06:14:06.998753648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:06.999442 containerd[1448]: time="2025-07-07T06:14:06.998843120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:06.999442 containerd[1448]: time="2025-07-07T06:14:06.998867947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:06.999442 containerd[1448]: time="2025-07-07T06:14:06.999198450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:07.000485 containerd[1448]: time="2025-07-07T06:14:06.999571371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:07.000485 containerd[1448]: time="2025-07-07T06:14:06.999637176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:07.000485 containerd[1448]: time="2025-07-07T06:14:06.999661483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:07.000485 containerd[1448]: time="2025-07-07T06:14:06.999746397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:07.020433 kubelet[2100]: W0707 06:14:07.020371 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 7 06:14:07.020526 kubelet[2100]: E0707 06:14:07.020443 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:14:07.021590 systemd[1]: Started cri-containerd-e0f5a7f132a5d8f9a7cc4ba51963c4d92f01015d29ed7f7895cdafe7fb975d20.scope - libcontainer container e0f5a7f132a5d8f9a7cc4ba51963c4d92f01015d29ed7f7895cdafe7fb975d20.
Jul 7 06:14:07.026056 systemd[1]: Started cri-containerd-6b4a25c2d03ed4efd1d22a4d112d40281a5d63ff7574ec428532b80b93e3585a.scope - libcontainer container 6b4a25c2d03ed4efd1d22a4d112d40281a5d63ff7574ec428532b80b93e3585a.
Jul 7 06:14:07.027287 systemd[1]: Started cri-containerd-8d49920286590b56751f0a86fd2973c8fc00a8711923f3ddd097eae0fcb6c73c.scope - libcontainer container 8d49920286590b56751f0a86fd2973c8fc00a8711923f3ddd097eae0fcb6c73c.
Jul 7 06:14:07.054104 containerd[1448]: time="2025-07-07T06:14:07.054035270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0f5a7f132a5d8f9a7cc4ba51963c4d92f01015d29ed7f7895cdafe7fb975d20\""
Jul 7 06:14:07.055046 containerd[1448]: time="2025-07-07T06:14:07.054973144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12e71b71a1c2fa3eda30c37920b8ec58,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b4a25c2d03ed4efd1d22a4d112d40281a5d63ff7574ec428532b80b93e3585a\""
Jul 7 06:14:07.056456 kubelet[2100]: E0707 06:14:07.056055 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:07.056894 kubelet[2100]: E0707 06:14:07.056867 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:07.058372 containerd[1448]: time="2025-07-07T06:14:07.058341761Z" level=info msg="CreateContainer within sandbox \"6b4a25c2d03ed4efd1d22a4d112d40281a5d63ff7574ec428532b80b93e3585a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 06:14:07.058494 containerd[1448]: time="2025-07-07T06:14:07.058477091Z" level=info msg="CreateContainer within sandbox \"e0f5a7f132a5d8f9a7cc4ba51963c4d92f01015d29ed7f7895cdafe7fb975d20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 06:14:07.063169 containerd[1448]: time="2025-07-07T06:14:07.063129603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d49920286590b56751f0a86fd2973c8fc00a8711923f3ddd097eae0fcb6c73c\""
Jul 7 06:14:07.064277 kubelet[2100]: E0707 06:14:07.064215 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:07.065598 containerd[1448]: time="2025-07-07T06:14:07.065547151Z" level=info msg="CreateContainer within sandbox \"8d49920286590b56751f0a86fd2973c8fc00a8711923f3ddd097eae0fcb6c73c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 06:14:07.070746 containerd[1448]: time="2025-07-07T06:14:07.070712118Z" level=info msg="CreateContainer within sandbox \"6b4a25c2d03ed4efd1d22a4d112d40281a5d63ff7574ec428532b80b93e3585a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23d33817a0e423434dc576657611026c9058e93735c7408a2051f55239844932\""
Jul 7 06:14:07.071386 containerd[1448]: time="2025-07-07T06:14:07.071308209Z" level=info msg="StartContainer for \"23d33817a0e423434dc576657611026c9058e93735c7408a2051f55239844932\""
Jul 7 06:14:07.074723 containerd[1448]: time="2025-07-07T06:14:07.074687460Z" level=info msg="CreateContainer within sandbox \"e0f5a7f132a5d8f9a7cc4ba51963c4d92f01015d29ed7f7895cdafe7fb975d20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4247a95cff8cddfafa9c027b294af70db667d06ff69417f76d1e7658fd8d6660\""
Jul 7 06:14:07.076072 containerd[1448]: time="2025-07-07T06:14:07.075090612Z" level=info msg="StartContainer for \"4247a95cff8cddfafa9c027b294af70db667d06ff69417f76d1e7658fd8d6660\""
Jul 7 06:14:07.082201 containerd[1448]: time="2025-07-07T06:14:07.082163831Z" level=info msg="CreateContainer within sandbox \"8d49920286590b56751f0a86fd2973c8fc00a8711923f3ddd097eae0fcb6c73c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286179bedff312b118de075d225220f9eac6249c3a8f2e3e330d5ffface35282\""
Jul 7 06:14:07.082702 containerd[1448]: time="2025-07-07T06:14:07.082660933Z" level=info msg="StartContainer for \"286179bedff312b118de075d225220f9eac6249c3a8f2e3e330d5ffface35282\""
Jul 7 06:14:07.093565 systemd[1]: Started cri-containerd-23d33817a0e423434dc576657611026c9058e93735c7408a2051f55239844932.scope - libcontainer container 23d33817a0e423434dc576657611026c9058e93735c7408a2051f55239844932.
Jul 7 06:14:07.098389 systemd[1]: Started cri-containerd-4247a95cff8cddfafa9c027b294af70db667d06ff69417f76d1e7658fd8d6660.scope - libcontainer container 4247a95cff8cddfafa9c027b294af70db667d06ff69417f76d1e7658fd8d6660.
Jul 7 06:14:07.115590 systemd[1]: Started cri-containerd-286179bedff312b118de075d225220f9eac6249c3a8f2e3e330d5ffface35282.scope - libcontainer container 286179bedff312b118de075d225220f9eac6249c3a8f2e3e330d5ffface35282.
Jul 7 06:14:07.121590 kubelet[2100]: W0707 06:14:07.121523 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 7 06:14:07.121884 kubelet[2100]: E0707 06:14:07.121592 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:14:07.129161 containerd[1448]: time="2025-07-07T06:14:07.129120846Z" level=info msg="StartContainer for \"23d33817a0e423434dc576657611026c9058e93735c7408a2051f55239844932\" returns successfully"
Jul 7 06:14:07.140667 kubelet[2100]: E0707 06:14:07.140499 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s"
Jul 7 06:14:07.152353 containerd[1448]: time="2025-07-07T06:14:07.152309923Z" level=info msg="StartContainer for \"4247a95cff8cddfafa9c027b294af70db667d06ff69417f76d1e7658fd8d6660\" returns successfully"
Jul 7 06:14:07.184320 kubelet[2100]: W0707 06:14:07.174002 2100 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 7 06:14:07.184320 kubelet[2100]: E0707 06:14:07.174069 2100 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:14:07.185218 containerd[1448]: time="2025-07-07T06:14:07.185173393Z" level=info msg="StartContainer for \"286179bedff312b118de075d225220f9eac6249c3a8f2e3e330d5ffface35282\" returns successfully"
Jul 7 06:14:07.359147 kubelet[2100]: I0707 06:14:07.359045 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:14:07.759733 kubelet[2100]: E0707 06:14:07.759197 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:07.759733 kubelet[2100]: E0707 06:14:07.759320 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:07.759954 kubelet[2100]: E0707 06:14:07.759841 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:07.760187 kubelet[2100]: E0707 06:14:07.760155 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:07.761841 kubelet[2100]: E0707 06:14:07.761820 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:07.762060 kubelet[2100]: E0707 06:14:07.762046 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:08.695177 kubelet[2100]: I0707 06:14:08.695138 2100 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:14:08.695177 kubelet[2100]: E0707 06:14:08.695178 2100 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 7 06:14:08.709156 kubelet[2100]: E0707 06:14:08.709120 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:08.764127 kubelet[2100]: E0707 06:14:08.764101 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:08.764314 kubelet[2100]: E0707 06:14:08.764212 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:08.764314 kubelet[2100]: E0707 06:14:08.764219 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:08.764451 kubelet[2100]: E0707 06:14:08.764381 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:08.810210 kubelet[2100]: E0707 06:14:08.810156 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:08.910876 kubelet[2100]: E0707 06:14:08.910829 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.011831 kubelet[2100]: E0707 06:14:09.011721 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.112508 kubelet[2100]: E0707 06:14:09.112472 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.212873 kubelet[2100]: E0707 06:14:09.212833 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.313192 kubelet[2100]: E0707 06:14:09.313056 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.414035 kubelet[2100]: E0707 06:14:09.413988 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.514827 kubelet[2100]: E0707 06:14:09.514775 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.528471 kubelet[2100]: E0707 06:14:09.528212 2100 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:14:09.528471 kubelet[2100]: E0707 06:14:09.528332 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:09.615424 kubelet[2100]: E0707 06:14:09.615109 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.716366 kubelet[2100]: E0707 06:14:09.716310 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.817218 kubelet[2100]: E0707 06:14:09.817174 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:09.918184 kubelet[2100]: E0707 06:14:09.918055 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:10.018913 kubelet[2100]: E0707 06:14:10.018859 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:10.119230 kubelet[2100]: E0707 06:14:10.119171 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:10.234584 kubelet[2100]: I0707 06:14:10.234303 2100 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:10.249738 kubelet[2100]: I0707 06:14:10.249706 2100 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:10.256606 kubelet[2100]: I0707 06:14:10.256559 2100 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:10.730425 kubelet[2100]: I0707 06:14:10.730306 2100 apiserver.go:52] "Watching apiserver"
Jul 7 06:14:10.733792 kubelet[2100]: E0707 06:14:10.733770 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:10.733906 kubelet[2100]: E0707 06:14:10.733881 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:10.734273 kubelet[2100]: E0707 06:14:10.734237 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:10.833608 kubelet[2100]: I0707 06:14:10.833573 2100 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:14:11.017561 systemd[1]: Reloading requested from client PID 2379 ('systemctl') (unit session-7.scope)...
Jul 7 06:14:11.017576 systemd[1]: Reloading...
Jul 7 06:14:11.085442 zram_generator::config[2421]: No configuration found.
Jul 7 06:14:11.165226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:14:11.224662 kubelet[2100]: E0707 06:14:11.224582 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:11.231243 systemd[1]: Reloading finished in 213 ms.
Jul 7 06:14:11.262837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:11.271319 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 06:14:11.271559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:11.271611 systemd[1]: kubelet.service: Consumed 1.037s CPU time, 132.3M memory peak, 0B memory swap peak.
Jul 7 06:14:11.278683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:11.401886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:11.405485 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:14:11.447131 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:11.447428 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:14:11.447428 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:11.447428 kubelet[2460]: I0707 06:14:11.447345 2460 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:14:11.452614 kubelet[2460]: I0707 06:14:11.452575 2460 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 06:14:11.452614 kubelet[2460]: I0707 06:14:11.452605 2460 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:14:11.452882 kubelet[2460]: I0707 06:14:11.452850 2460 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 06:14:11.454010 kubelet[2460]: I0707 06:14:11.453985 2460 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 06:14:11.456335 kubelet[2460]: I0707 06:14:11.456204 2460 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:14:11.460623 kubelet[2460]: E0707 06:14:11.460593 2460 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 06:14:11.460623 kubelet[2460]: I0707 06:14:11.460622 2460 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 06:14:11.463009 kubelet[2460]: I0707 06:14:11.462977 2460 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:14:11.463208 kubelet[2460]: I0707 06:14:11.463181 2460 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:14:11.463360 kubelet[2460]: I0707 06:14:11.463206 2460 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:14:11.463468 kubelet[2460]: I0707 06:14:11.463372 2460 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:14:11.463468 kubelet[2460]: I0707 06:14:11.463381 2460 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 06:14:11.463468 kubelet[2460]: I0707 06:14:11.463445 2460 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:14:11.463579 kubelet[2460]: I0707 06:14:11.463569 2460 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 06:14:11.463606 kubelet[2460]: I0707 06:14:11.463584 2460 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:14:11.463606 kubelet[2460]: I0707 06:14:11.463600 2460 kubelet.go:352] "Adding apiserver pod source"
Jul 7 06:14:11.463664 kubelet[2460]: I0707 06:14:11.463608 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:14:11.466493 kubelet[2460]: I0707 06:14:11.466444 2460 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 06:14:11.468321 kubelet[2460]: I0707 06:14:11.468165 2460 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:14:11.472416 kubelet[2460]: I0707 06:14:11.470260 2460 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 06:14:11.472416 kubelet[2460]: I0707 06:14:11.470298 2460 server.go:1287] "Started kubelet"
Jul 7 06:14:11.472416 kubelet[2460]: I0707 06:14:11.471422 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:14:11.473765 kubelet[2460]: I0707 06:14:11.473726 2460 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.474712 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.474952 2460 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.475117 2460 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.475597 2460 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.475674 2460 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 06:14:11.477435 kubelet[2460]: I0707 06:14:11.475792 2460 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:14:11.477435 kubelet[2460]: E0707 06:14:11.475994 2460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:14:11.478619 kubelet[2460]: I0707 06:14:11.477959 2460 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 06:14:11.483368 kubelet[2460]: I0707 06:14:11.481608 2460 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:14:11.483703 kubelet[2460]: I0707 06:14:11.483605 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:14:11.484459 kubelet[2460]: I0707 06:14:11.483004 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:14:11.485645 kubelet[2460]: E0707 06:14:11.483888 2460 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:14:11.487025 kubelet[2460]: I0707 06:14:11.486953 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:14:11.487025 kubelet[2460]: I0707 06:14:11.487019 2460 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 06:14:11.487114 kubelet[2460]: I0707 06:14:11.487034 2460 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:14:11.487114 kubelet[2460]: I0707 06:14:11.487041 2460 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 06:14:11.487114 kubelet[2460]: E0707 06:14:11.487077 2460 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:14:11.487245 kubelet[2460]: I0707 06:14:11.487222 2460 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:14:11.512441 kubelet[2460]: I0707 06:14:11.512419 2460 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:14:11.512441 kubelet[2460]: I0707 06:14:11.512434 2460 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:14:11.512536 kubelet[2460]: I0707 06:14:11.512454 2460 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:14:11.512600 kubelet[2460]: I0707 06:14:11.512582 2460 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:14:11.512626 kubelet[2460]: I0707 06:14:11.512597 2460 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:14:11.512626 kubelet[2460]: I0707 06:14:11.512615 2460 policy_none.go:49] "None policy: Start"
Jul 7 06:14:11.512675 kubelet[2460]: I0707 06:14:11.512631 2460 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:14:11.512675 kubelet[2460]: I0707 06:14:11.512642 2460 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:14:11.512738 kubelet[2460]: I0707 06:14:11.512724 2460 state_mem.go:75] "Updated machine memory state"
Jul 7 06:14:11.516186 kubelet[2460]: I0707 06:14:11.516164 2460 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 06:14:11.516515 kubelet[2460]: I0707 06:14:11.516305 2460 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:14:11.516515 kubelet[2460]: I0707 06:14:11.516323 2460 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:14:11.516515 kubelet[2460]: I0707 06:14:11.516507 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:14:11.517878 kubelet[2460]: E0707 06:14:11.517856 2460 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:14:11.588364 kubelet[2460]: I0707 06:14:11.588275 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:11.588736 kubelet[2460]: I0707 06:14:11.588313 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:11.588736 kubelet[2460]: I0707 06:14:11.588363 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.594249 kubelet[2460]: E0707 06:14:11.594110 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.594312 kubelet[2460]: E0707 06:14:11.594275 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:11.595012 kubelet[2460]: E0707 06:14:11.594985 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:11.622302 kubelet[2460]: I0707 06:14:11.622279 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:14:11.629918 kubelet[2460]: I0707 06:14:11.629872 2460 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 7 06:14:11.629983 kubelet[2460]: I0707 06:14:11.629941 2460 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:14:11.777313 kubelet[2460]: I0707 06:14:11.777276 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:11.777313 kubelet[2460]: I0707 06:14:11.777310 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:11.777468 kubelet[2460]: I0707 06:14:11.777330 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.777468 kubelet[2460]: I0707 06:14:11.777355 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.777468 kubelet[2460]: I0707 06:14:11.777370 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12e71b71a1c2fa3eda30c37920b8ec58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12e71b71a1c2fa3eda30c37920b8ec58\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:11.777468 kubelet[2460]: I0707 06:14:11.777431 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.777596 kubelet[2460]: I0707 06:14:11.777509 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.777596 kubelet[2460]: I0707 06:14:11.777558 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:11.777596 kubelet[2460]: I0707 06:14:11.777590 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:11.895072 kubelet[2460]: E0707 06:14:11.894960 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:11.895072 kubelet[2460]: E0707 06:14:11.894963 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:11.895171 kubelet[2460]: E0707 06:14:11.895108 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:12.023130 sudo[2498]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 7 06:14:12.023460 sudo[2498]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 7 06:14:12.443887 sudo[2498]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:12.465419 kubelet[2460]: I0707 06:14:12.464929 2460 apiserver.go:52] "Watching apiserver"
Jul 7 06:14:12.477954 kubelet[2460]: I0707 06:14:12.476671 2460 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:14:12.498170 kubelet[2460]: I0707 06:14:12.498120 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:12.498246 kubelet[2460]: I0707 06:14:12.498187 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:12.498859 kubelet[2460]: E0707 06:14:12.498830 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:12.505200 kubelet[2460]: E0707 06:14:12.504820 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:12.505200 kubelet[2460]: E0707 06:14:12.505008 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:12.507661 kubelet[2460]: E0707 06:14:12.507476 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:12.507661 kubelet[2460]: E0707 06:14:12.507605 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:12.516366 kubelet[2460]: I0707 06:14:12.516306 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.516270207 podStartE2EDuration="2.516270207s" podCreationTimestamp="2025-07-07 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:12.515983934 +0000 UTC m=+1.106948504" watchObservedRunningTime="2025-07-07 06:14:12.516270207 +0000 UTC m=+1.107234778"
Jul 7 06:14:12.534813 kubelet[2460]: I0707 06:14:12.534691 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.534678838 podStartE2EDuration="2.534678838s" podCreationTimestamp="2025-07-07 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:12.53374609 +0000 UTC m=+1.124710661" watchObservedRunningTime="2025-07-07 06:14:12.534678838 +0000 UTC m=+1.125643409"
Jul 7 06:14:12.534813 kubelet[2460]: I0707 06:14:12.534780 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.534776675 podStartE2EDuration="2.534776675s" podCreationTimestamp="2025-07-07 06:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:12.524049612 +0000 UTC m=+1.115014183" watchObservedRunningTime="2025-07-07 06:14:12.534776675 +0000 UTC m=+1.125741246"
Jul 7 06:14:13.500476 kubelet[2460]: E0707 06:14:13.500408 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:13.501394 kubelet[2460]: E0707 06:14:13.500589 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:14.320418 sudo[1616]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:14.322504 sshd[1613]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:14.326049 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:14:14.326223 systemd[1]: session-7.scope: Consumed 7.427s CPU time, 154.9M memory peak, 0B memory swap peak.
Jul 7 06:14:14.326758 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:60918.service: Deactivated successfully.
Jul 7 06:14:14.329116 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:14:14.330058 systemd-logind[1418]: Removed session 7.
Jul 7 06:14:14.504339 kubelet[2460]: E0707 06:14:14.504295 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:15.794646 kubelet[2460]: E0707 06:14:15.794600 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:17.150344 kubelet[2460]: I0707 06:14:17.150270 2460 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:14:17.150937 kubelet[2460]: I0707 06:14:17.150828 2460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:14:17.151156 containerd[1448]: time="2025-07-07T06:14:17.150566100Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:14:17.186570 kubelet[2460]: E0707 06:14:17.186525 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:17.508929 kubelet[2460]: E0707 06:14:17.508509 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:17.954379 systemd[1]: Created slice kubepods-besteffort-podd83f6e88_6584_46ed_b681_57ae648b66bd.slice - libcontainer container kubepods-besteffort-podd83f6e88_6584_46ed_b681_57ae648b66bd.slice.
Jul 7 06:14:17.965038 systemd[1]: Created slice kubepods-burstable-pod138efa40_59f0_43ea_8bb4_b15c317538f3.slice - libcontainer container kubepods-burstable-pod138efa40_59f0_43ea_8bb4_b15c317538f3.slice.
Jul 7 06:14:18.019036 kubelet[2460]: I0707 06:14:18.018994 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-hubble-tls\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019036 kubelet[2460]: I0707 06:14:18.019034 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-config-path\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019068 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d83f6e88-6584-46ed-b681-57ae648b66bd-kube-proxy\") pod \"kube-proxy-m67bw\" (UID: \"d83f6e88-6584-46ed-b681-57ae648b66bd\") " pod="kube-system/kube-proxy-m67bw"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019088 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-cgroup\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019105 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-etc-cni-netd\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019146 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-run\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019165 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-hostproc\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019212 kubelet[2460]: I0707 06:14:18.019181 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d83f6e88-6584-46ed-b681-57ae648b66bd-lib-modules\") pod \"kube-proxy-m67bw\" (UID: \"d83f6e88-6584-46ed-b681-57ae648b66bd\") " pod="kube-system/kube-proxy-m67bw"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019207 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-lib-modules\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019222 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-xtables-lock\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019237 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-kernel\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019267 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rt2n\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-kube-api-access-9rt2n\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019281 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d83f6e88-6584-46ed-b681-57ae648b66bd-xtables-lock\") pod \"kube-proxy-m67bw\" (UID: \"d83f6e88-6584-46ed-b681-57ae648b66bd\") " pod="kube-system/kube-proxy-m67bw"
Jul 7 06:14:18.019337 kubelet[2460]: I0707 06:14:18.019308 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-bpf-maps\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019472 kubelet[2460]: I0707 06:14:18.019334 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-net\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019472 kubelet[2460]: I0707 06:14:18.019370 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqj8k\" (UniqueName: \"kubernetes.io/projected/d83f6e88-6584-46ed-b681-57ae648b66bd-kube-api-access-zqj8k\") pod \"kube-proxy-m67bw\" (UID: \"d83f6e88-6584-46ed-b681-57ae648b66bd\") " pod="kube-system/kube-proxy-m67bw"
Jul 7 06:14:18.019472 kubelet[2460]: I0707 06:14:18.019387 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cni-path\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.019472 kubelet[2460]: I0707 06:14:18.019427 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/138efa40-59f0-43ea-8bb4-b15c317538f3-clustermesh-secrets\") pod \"cilium-6lrmn\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") " pod="kube-system/cilium-6lrmn"
Jul 7 06:14:18.241753 systemd[1]: Created slice kubepods-besteffort-podcc59d381_a793_42af_a4b2_914c5b7e4a8f.slice - libcontainer container kubepods-besteffort-podcc59d381_a793_42af_a4b2_914c5b7e4a8f.slice.
Jul 7 06:14:18.262350 kubelet[2460]: E0707 06:14:18.262316 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.262999 containerd[1448]: time="2025-07-07T06:14:18.262959120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m67bw,Uid:d83f6e88-6584-46ed-b681-57ae648b66bd,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:18.267818 kubelet[2460]: E0707 06:14:18.267583 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.268029 containerd[1448]: time="2025-07-07T06:14:18.267976569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lrmn,Uid:138efa40-59f0-43ea-8bb4-b15c317538f3,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:18.284665 containerd[1448]: time="2025-07-07T06:14:18.284563435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:18.284951 containerd[1448]: time="2025-07-07T06:14:18.284888037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:18.284951 containerd[1448]: time="2025-07-07T06:14:18.284920665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.285223 containerd[1448]: time="2025-07-07T06:14:18.285034263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.291541 containerd[1448]: time="2025-07-07T06:14:18.291343361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:18.291541 containerd[1448]: time="2025-07-07T06:14:18.291462157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:18.291541 containerd[1448]: time="2025-07-07T06:14:18.291477992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.291757 containerd[1448]: time="2025-07-07T06:14:18.291642891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.303829 systemd[1]: Started cri-containerd-5376b3ca6fb4b3e99f52fa40bb38a5d9d94a60765f0baee7f5bd90e300fe8482.scope - libcontainer container 5376b3ca6fb4b3e99f52fa40bb38a5d9d94a60765f0baee7f5bd90e300fe8482.
Jul 7 06:14:18.306926 systemd[1]: Started cri-containerd-146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412.scope - libcontainer container 146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412.
Jul 7 06:14:18.322510 kubelet[2460]: I0707 06:14:18.322406 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4pdx\" (UniqueName: \"kubernetes.io/projected/cc59d381-a793-42af-a4b2-914c5b7e4a8f-kube-api-access-n4pdx\") pod \"cilium-operator-6c4d7847fc-rng6h\" (UID: \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\") " pod="kube-system/cilium-operator-6c4d7847fc-rng6h"
Jul 7 06:14:18.322617 kubelet[2460]: I0707 06:14:18.322525 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc59d381-a793-42af-a4b2-914c5b7e4a8f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rng6h\" (UID: \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\") " pod="kube-system/cilium-operator-6c4d7847fc-rng6h"
Jul 7 06:14:18.325658 containerd[1448]: time="2025-07-07T06:14:18.325564831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m67bw,Uid:d83f6e88-6584-46ed-b681-57ae648b66bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5376b3ca6fb4b3e99f52fa40bb38a5d9d94a60765f0baee7f5bd90e300fe8482\""
Jul 7 06:14:18.326261 kubelet[2460]: E0707 06:14:18.326234 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.329794 containerd[1448]: time="2025-07-07T06:14:18.329763859Z" level=info msg="CreateContainer within sandbox \"5376b3ca6fb4b3e99f52fa40bb38a5d9d94a60765f0baee7f5bd90e300fe8482\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:14:18.332459 containerd[1448]: time="2025-07-07T06:14:18.332427726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lrmn,Uid:138efa40-59f0-43ea-8bb4-b15c317538f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\""
Jul 7 06:14:18.332983 kubelet[2460]: E0707 06:14:18.332958 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.334577 containerd[1448]: time="2025-07-07T06:14:18.333851647Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 7 06:14:18.347352 containerd[1448]: time="2025-07-07T06:14:18.347305536Z" level=info msg="CreateContainer within sandbox \"5376b3ca6fb4b3e99f52fa40bb38a5d9d94a60765f0baee7f5bd90e300fe8482\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a56545edcc23ca314cbf1a5118a8352fff8ada773a3064cde3085fdec52e9d1b\""
Jul 7 06:14:18.347991 containerd[1448]: time="2025-07-07T06:14:18.347905517Z" level=info msg="StartContainer for \"a56545edcc23ca314cbf1a5118a8352fff8ada773a3064cde3085fdec52e9d1b\""
Jul 7 06:14:18.374559 systemd[1]: Started cri-containerd-a56545edcc23ca314cbf1a5118a8352fff8ada773a3064cde3085fdec52e9d1b.scope - libcontainer container a56545edcc23ca314cbf1a5118a8352fff8ada773a3064cde3085fdec52e9d1b.
Jul 7 06:14:18.399833 containerd[1448]: time="2025-07-07T06:14:18.399788662Z" level=info msg="StartContainer for \"a56545edcc23ca314cbf1a5118a8352fff8ada773a3064cde3085fdec52e9d1b\" returns successfully"
Jul 7 06:14:18.512909 kubelet[2460]: E0707 06:14:18.511857 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.521714 kubelet[2460]: I0707 06:14:18.521505 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m67bw" podStartSLOduration=1.521490405 podStartE2EDuration="1.521490405s" podCreationTimestamp="2025-07-07 06:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:18.521266246 +0000 UTC m=+7.112230777" watchObservedRunningTime="2025-07-07 06:14:18.521490405 +0000 UTC m=+7.112454936"
Jul 7 06:14:18.547501 kubelet[2460]: E0707 06:14:18.547460 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:18.547955 containerd[1448]: time="2025-07-07T06:14:18.547915640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rng6h,Uid:cc59d381-a793-42af-a4b2-914c5b7e4a8f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:18.574480 containerd[1448]: time="2025-07-07T06:14:18.572602510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:18.574480 containerd[1448]: time="2025-07-07T06:14:18.572656571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:18.574480 containerd[1448]: time="2025-07-07T06:14:18.572684440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.574480 containerd[1448]: time="2025-07-07T06:14:18.572761612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:18.592558 systemd[1]: Started cri-containerd-17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030.scope - libcontainer container 17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030.
Jul 7 06:14:18.619603 containerd[1448]: time="2025-07-07T06:14:18.619517628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rng6h,Uid:cc59d381-a793-42af-a4b2-914c5b7e4a8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\""
Jul 7 06:14:18.620279 kubelet[2460]: E0707 06:14:18.620254 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:24.458922 kubelet[2460]: E0707 06:14:24.458879 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:25.802299 kubelet[2460]: E0707 06:14:25.802268 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:27.043216 update_engine[1421]: I20250707 06:14:27.041435 1421 update_attempter.cc:509] Updating boot flags...
Jul 7 06:14:27.097477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2843)
Jul 7 06:14:27.152468 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2842)
Jul 7 06:14:27.497521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302114256.mount: Deactivated successfully.
Jul 7 06:14:28.785877 containerd[1448]: time="2025-07-07T06:14:28.785817905Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:28.786446 containerd[1448]: time="2025-07-07T06:14:28.786273744Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 7 06:14:28.787029 containerd[1448]: time="2025-07-07T06:14:28.786998832Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:28.788813 containerd[1448]: time="2025-07-07T06:14:28.788766722Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.454020476s"
Jul 7 06:14:28.788862 containerd[1448]: time="2025-07-07T06:14:28.788811590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 7 06:14:28.791850 containerd[1448]: time="2025-07-07T06:14:28.791616125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 7 06:14:28.796731 containerd[1448]: time="2025-07-07T06:14:28.796596922Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:14:28.817813 containerd[1448]: time="2025-07-07T06:14:28.817761579Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\""
Jul 7 06:14:28.818279 containerd[1448]: time="2025-07-07T06:14:28.818251369Z" level=info msg="StartContainer for \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\""
Jul 7 06:14:28.842568 systemd[1]: Started cri-containerd-0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b.scope - libcontainer container 0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b.
Jul 7 06:14:28.862509 containerd[1448]: time="2025-07-07T06:14:28.862360691Z" level=info msg="StartContainer for \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\" returns successfully"
Jul 7 06:14:28.933053 systemd[1]: cri-containerd-0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b.scope: Deactivated successfully.
Jul 7 06:14:29.036057 containerd[1448]: time="2025-07-07T06:14:29.035891517Z" level=info msg="shim disconnected" id=0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b namespace=k8s.io
Jul 7 06:14:29.036057 containerd[1448]: time="2025-07-07T06:14:29.035956380Z" level=warning msg="cleaning up after shim disconnected" id=0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b namespace=k8s.io
Jul 7 06:14:29.036057 containerd[1448]: time="2025-07-07T06:14:29.035964898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:14:29.537923 kubelet[2460]: E0707 06:14:29.537894 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:29.539724 containerd[1448]: time="2025-07-07T06:14:29.539592686Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:14:29.560585 containerd[1448]: time="2025-07-07T06:14:29.560476191Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\""
Jul 7 06:14:29.563898 containerd[1448]: time="2025-07-07T06:14:29.561944493Z" level=info msg="StartContainer for \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\""
Jul 7 06:14:29.603575 systemd[1]: Started cri-containerd-bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd.scope - libcontainer container bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd.
Jul 7 06:14:29.624531 containerd[1448]: time="2025-07-07T06:14:29.624362830Z" level=info msg="StartContainer for \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\" returns successfully"
Jul 7 06:14:29.636170 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:14:29.636423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:14:29.636755 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:14:29.643730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:14:29.643894 systemd[1]: cri-containerd-bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd.scope: Deactivated successfully.
Jul 7 06:14:29.656258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:14:29.661601 containerd[1448]: time="2025-07-07T06:14:29.661447086Z" level=info msg="shim disconnected" id=bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd namespace=k8s.io
Jul 7 06:14:29.661601 containerd[1448]: time="2025-07-07T06:14:29.661503591Z" level=warning msg="cleaning up after shim disconnected" id=bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd namespace=k8s.io
Jul 7 06:14:29.661601 containerd[1448]: time="2025-07-07T06:14:29.661512029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:14:29.670652 containerd[1448]: time="2025-07-07T06:14:29.670609808Z" level=warning msg="cleanup warnings time=\"2025-07-07T06:14:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 06:14:29.815382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b-rootfs.mount: Deactivated successfully.
Jul 7 06:14:29.885723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907940504.mount: Deactivated successfully.
Jul 7 06:14:30.398841 containerd[1448]: time="2025-07-07T06:14:30.398784451Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:30.399360 containerd[1448]: time="2025-07-07T06:14:30.399313200Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 7 06:14:30.400087 containerd[1448]: time="2025-07-07T06:14:30.400056254Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:30.401593 containerd[1448]: time="2025-07-07T06:14:30.401558320Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.609896807s"
Jul 7 06:14:30.401629 containerd[1448]: time="2025-07-07T06:14:30.401596430Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 7 06:14:30.404118 containerd[1448]: time="2025-07-07T06:14:30.404084330Z" level=info msg="CreateContainer within sandbox \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 7 06:14:30.422800 containerd[1448]: time="2025-07-07T06:14:30.422757155Z" level=info msg="CreateContainer within sandbox \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\""
Jul 7 06:14:30.423886 containerd[1448]: time="2025-07-07T06:14:30.423327892Z" level=info msg="StartContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\""
Jul 7 06:14:30.450561 systemd[1]: Started cri-containerd-6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed.scope - libcontainer container 6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed.
Jul 7 06:14:30.470230 containerd[1448]: time="2025-07-07T06:14:30.470173373Z" level=info msg="StartContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" returns successfully"
Jul 7 06:14:30.538280 kubelet[2460]: E0707 06:14:30.538205 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:30.543170 kubelet[2460]: E0707 06:14:30.542577 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:30.543770 containerd[1448]: time="2025-07-07T06:14:30.543477258Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 06:14:30.566359 containerd[1448]: time="2025-07-07T06:14:30.566063707Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\""
Jul 7 06:14:30.567712 containerd[1448]: time="2025-07-07T06:14:30.567677024Z" level=info msg="StartContainer for \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\""
Jul 7 06:14:30.569532 kubelet[2460]: I0707 06:14:30.569431 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rng6h" podStartSLOduration=0.788109789 podStartE2EDuration="12.56941727s" podCreationTimestamp="2025-07-07 06:14:18 +0000 UTC" firstStartedPulling="2025-07-07 06:14:18.620953144 +0000 UTC m=+7.211917675" lastFinishedPulling="2025-07-07 06:14:30.402260585 +0000 UTC m=+18.993225156" observedRunningTime="2025-07-07 06:14:30.569232437 +0000 UTC m=+19.160197008" watchObservedRunningTime="2025-07-07 06:14:30.56941727 +0000 UTC m=+19.160381841"
Jul 7 06:14:30.613620 systemd[1]: Started cri-containerd-966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298.scope - libcontainer container 966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298.
Jul 7 06:14:30.651338 containerd[1448]: time="2025-07-07T06:14:30.651024005Z" level=info msg="StartContainer for \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\" returns successfully"
Jul 7 06:14:30.677969 systemd[1]: cri-containerd-966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298.scope: Deactivated successfully.
Jul 7 06:14:30.778673 containerd[1448]: time="2025-07-07T06:14:30.778610236Z" level=info msg="shim disconnected" id=966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298 namespace=k8s.io
Jul 7 06:14:30.778673 containerd[1448]: time="2025-07-07T06:14:30.778666502Z" level=warning msg="cleaning up after shim disconnected" id=966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298 namespace=k8s.io
Jul 7 06:14:30.778673 containerd[1448]: time="2025-07-07T06:14:30.778675340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:14:31.549125 kubelet[2460]: E0707 06:14:31.549095 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:31.549501 kubelet[2460]: E0707 06:14:31.549202 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:31.552069 containerd[1448]: time="2025-07-07T06:14:31.552025472Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:14:31.576878 containerd[1448]: time="2025-07-07T06:14:31.576820204Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\""
Jul 7 06:14:31.577544 containerd[1448]: time="2025-07-07T06:14:31.577501719Z" level=info msg="StartContainer for \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\""
Jul 7 06:14:31.603545 systemd[1]: Started cri-containerd-d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996.scope - libcontainer container d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996.
Jul 7 06:14:31.621578 systemd[1]: cri-containerd-d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996.scope: Deactivated successfully.
Jul 7 06:14:31.624582 containerd[1448]: time="2025-07-07T06:14:31.624480733Z" level=info msg="StartContainer for \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\" returns successfully"
Jul 7 06:14:31.640476 containerd[1448]: time="2025-07-07T06:14:31.640393090Z" level=info msg="shim disconnected" id=d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996 namespace=k8s.io
Jul 7 06:14:31.640476 containerd[1448]: time="2025-07-07T06:14:31.640458714Z" level=warning msg="cleaning up after shim disconnected" id=d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996 namespace=k8s.io
Jul 7 06:14:31.640476 containerd[1448]: time="2025-07-07T06:14:31.640467592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:14:31.815488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996-rootfs.mount: Deactivated successfully.
Jul 7 06:14:32.553103 kubelet[2460]: E0707 06:14:32.553048 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:32.555578 containerd[1448]: time="2025-07-07T06:14:32.555538967Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:14:32.570156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184512147.mount: Deactivated successfully.
Jul 7 06:14:32.573633 containerd[1448]: time="2025-07-07T06:14:32.573586705Z" level=info msg="CreateContainer within sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\""
Jul 7 06:14:32.575094 containerd[1448]: time="2025-07-07T06:14:32.574589670Z" level=info msg="StartContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\""
Jul 7 06:14:32.606566 systemd[1]: Started cri-containerd-e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff.scope - libcontainer container e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff.
Jul 7 06:14:32.630444 containerd[1448]: time="2025-07-07T06:14:32.630345305Z" level=info msg="StartContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" returns successfully"
Jul 7 06:14:32.774651 kubelet[2460]: I0707 06:14:32.774604 2460 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 06:14:32.820834 systemd[1]: Created slice kubepods-burstable-podf8e36fd5_e010_4778_9d8f_5fda7dc850ff.slice - libcontainer container kubepods-burstable-podf8e36fd5_e010_4778_9d8f_5fda7dc850ff.slice.
Jul 7 06:14:32.825589 systemd[1]: Created slice kubepods-burstable-pode39ec751_4d0c_4005_abd8_8422887056c9.slice - libcontainer container kubepods-burstable-pode39ec751_4d0c_4005_abd8_8422887056c9.slice.
Jul 7 06:14:32.828112 kubelet[2460]: I0707 06:14:32.828068 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvqs\" (UniqueName: \"kubernetes.io/projected/f8e36fd5-e010-4778-9d8f-5fda7dc850ff-kube-api-access-nwvqs\") pod \"coredns-668d6bf9bc-zwxt6\" (UID: \"f8e36fd5-e010-4778-9d8f-5fda7dc850ff\") " pod="kube-system/coredns-668d6bf9bc-zwxt6"
Jul 7 06:14:32.828195 kubelet[2460]: I0707 06:14:32.828133 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e39ec751-4d0c-4005-abd8-8422887056c9-config-volume\") pod \"coredns-668d6bf9bc-sm5l9\" (UID: \"e39ec751-4d0c-4005-abd8-8422887056c9\") " pod="kube-system/coredns-668d6bf9bc-sm5l9"
Jul 7 06:14:32.828195 kubelet[2460]: I0707 06:14:32.828163 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6gc\" (UniqueName: \"kubernetes.io/projected/e39ec751-4d0c-4005-abd8-8422887056c9-kube-api-access-nf6gc\") pod \"coredns-668d6bf9bc-sm5l9\" (UID: \"e39ec751-4d0c-4005-abd8-8422887056c9\") " pod="kube-system/coredns-668d6bf9bc-sm5l9"
Jul 7 06:14:32.828195 kubelet[2460]: I0707 06:14:32.828187 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8e36fd5-e010-4778-9d8f-5fda7dc850ff-config-volume\") pod \"coredns-668d6bf9bc-zwxt6\" (UID: \"f8e36fd5-e010-4778-9d8f-5fda7dc850ff\") " pod="kube-system/coredns-668d6bf9bc-zwxt6"
Jul 7 06:14:33.125013 kubelet[2460]: E0707 06:14:33.123922 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:33.125131 containerd[1448]: time="2025-07-07T06:14:33.124638636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxt6,Uid:f8e36fd5-e010-4778-9d8f-5fda7dc850ff,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:33.129488 kubelet[2460]: E0707 06:14:33.129463 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:33.130862 containerd[1448]: time="2025-07-07T06:14:33.130809117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sm5l9,Uid:e39ec751-4d0c-4005-abd8-8422887056c9,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:33.558439 kubelet[2460]: E0707 06:14:33.557829 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:34.559722 kubelet[2460]: E0707 06:14:34.559679 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:34.853809 systemd-networkd[1380]: cilium_host: Link UP
Jul 7 06:14:34.854625 systemd-networkd[1380]: cilium_net: Link UP
Jul 7 06:14:34.855513 systemd-networkd[1380]: cilium_net: Gained carrier
Jul 7 06:14:34.855696 systemd-networkd[1380]: cilium_host: Gained carrier
Jul 7 06:14:34.934888 systemd-networkd[1380]: cilium_vxlan: Link UP
Jul 7 06:14:34.934899 systemd-networkd[1380]: cilium_vxlan: Gained carrier
Jul 7 06:14:35.218442 kernel: NET: Registered PF_ALG protocol family
Jul 7 06:14:35.321581 systemd-networkd[1380]: cilium_net: Gained IPv6LL
Jul 7 06:14:35.561007 kubelet[2460]: E0707 06:14:35.560905 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:35.569584 systemd-networkd[1380]: cilium_host: Gained IPv6LL
Jul 7 06:14:35.772513 systemd-networkd[1380]: lxc_health: Link UP
Jul 7 06:14:35.783970 systemd-networkd[1380]: lxc_health: Gained carrier
Jul 7 06:14:36.276094 systemd-networkd[1380]: lxcc2ee00608ab8: Link UP
Jul 7 06:14:36.284200 systemd-networkd[1380]: lxc3a1b63ca1b43: Link UP
Jul 7 06:14:36.292429 kernel: eth0: renamed from tmp62de5
Jul 7 06:14:36.300427 kernel: eth0: renamed from tmpccc8e
Jul 7 06:14:36.306816 kubelet[2460]: I0707 06:14:36.306648 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6lrmn" podStartSLOduration=8.848525531 podStartE2EDuration="19.306632067s" podCreationTimestamp="2025-07-07 06:14:17 +0000 UTC" firstStartedPulling="2025-07-07 06:14:18.333339194 +0000 UTC m=+6.924303765" lastFinishedPulling="2025-07-07 06:14:28.79144577 +0000 UTC m=+17.382410301" observedRunningTime="2025-07-07 06:14:33.573259473 +0000 UTC m=+22.164224044" watchObservedRunningTime="2025-07-07 06:14:36.306632067 +0000 UTC m=+24.897596598"
Jul 7 06:14:36.307352 systemd-networkd[1380]: lxc3a1b63ca1b43: Gained carrier
Jul 7 06:14:36.313160 systemd-networkd[1380]: lxcc2ee00608ab8: Gained carrier
Jul 7 06:14:36.562883 kubelet[2460]: E0707 06:14:36.562601 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:36.976839 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL
Jul 7 06:14:37.040848 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Jul 7 06:14:37.564506 kubelet[2460]: E0707 06:14:37.564252 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:37.744827 systemd-networkd[1380]: lxc3a1b63ca1b43: Gained IPv6LL
Jul 7 06:14:38.385773 systemd-networkd[1380]: lxcc2ee00608ab8: Gained IPv6LL
Jul 7 06:14:38.566257 kubelet[2460]: E0707 06:14:38.566178 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:38.670942 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:52312.service - OpenSSH per-connection server daemon (10.0.0.1:52312).
Jul 7 06:14:38.706608 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 52312 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:38.707396 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:38.710944 systemd-logind[1418]: New session 8 of user core.
Jul 7 06:14:38.716532 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:14:38.848293 sshd[3697]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:38.851520 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:14:38.852268 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:52312.service: Deactivated successfully.
Jul 7 06:14:38.854246 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:14:38.855077 systemd-logind[1418]: Removed session 8.
Jul 7 06:14:39.753474 containerd[1448]: time="2025-07-07T06:14:39.752956632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:39.753474 containerd[1448]: time="2025-07-07T06:14:39.753011422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:39.753474 containerd[1448]: time="2025-07-07T06:14:39.753028379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:39.753934 containerd[1448]: time="2025-07-07T06:14:39.753536204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:39.771081 containerd[1448]: time="2025-07-07T06:14:39.770983015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:14:39.771081 containerd[1448]: time="2025-07-07T06:14:39.771043204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:14:39.771081 containerd[1448]: time="2025-07-07T06:14:39.771054242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:39.771352 containerd[1448]: time="2025-07-07T06:14:39.771129468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:14:39.777987 systemd[1]: Started cri-containerd-62de5fc1ce4f551f3999faf6670eea868c334a79600530368b2fe20472568b68.scope - libcontainer container 62de5fc1ce4f551f3999faf6670eea868c334a79600530368b2fe20472568b68.
Jul 7 06:14:39.785802 systemd[1]: Started cri-containerd-ccc8eb14b70fecc332b98a15236be92c26409ddda5d6ffc6a9a39f28849b739a.scope - libcontainer container ccc8eb14b70fecc332b98a15236be92c26409ddda5d6ffc6a9a39f28849b739a.
Jul 7 06:14:39.789553 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:14:39.796300 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:14:39.807944 containerd[1448]: time="2025-07-07T06:14:39.807817435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxt6,Uid:f8e36fd5-e010-4778-9d8f-5fda7dc850ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"62de5fc1ce4f551f3999faf6670eea868c334a79600530368b2fe20472568b68\""
Jul 7 06:14:39.809961 kubelet[2460]: E0707 06:14:39.809910 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:39.812193 containerd[1448]: time="2025-07-07T06:14:39.812161021Z" level=info msg="CreateContainer within sandbox \"62de5fc1ce4f551f3999faf6670eea868c334a79600530368b2fe20472568b68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:14:39.815062 containerd[1448]: time="2025-07-07T06:14:39.814482786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sm5l9,Uid:e39ec751-4d0c-4005-abd8-8422887056c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccc8eb14b70fecc332b98a15236be92c26409ddda5d6ffc6a9a39f28849b739a\""
Jul 7 06:14:39.815607 kubelet[2460]: E0707 06:14:39.815583 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:39.817839 containerd[1448]: time="2025-07-07T06:14:39.817734657Z" level=info msg="CreateContainer within sandbox \"ccc8eb14b70fecc332b98a15236be92c26409ddda5d6ffc6a9a39f28849b739a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:14:39.830212 containerd[1448]: time="2025-07-07T06:14:39.830169447Z" level=info msg="CreateContainer within sandbox \"62de5fc1ce4f551f3999faf6670eea868c334a79600530368b2fe20472568b68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0c86e5276cc0c6c4af7e7939d59abd0a7ba13772b34cb5fefc7ad794835886f\""
Jul 7 06:14:39.830929 containerd[1448]: time="2025-07-07T06:14:39.830713186Z" level=info msg="StartContainer for \"a0c86e5276cc0c6c4af7e7939d59abd0a7ba13772b34cb5fefc7ad794835886f\""
Jul 7 06:14:39.833193 containerd[1448]: time="2025-07-07T06:14:39.833156888Z" level=info msg="CreateContainer within sandbox \"ccc8eb14b70fecc332b98a15236be92c26409ddda5d6ffc6a9a39f28849b739a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"913f3eab202add45d46fd24cc23af6aa360210e2045d60ce6d73c1a4c14d3f96\""
Jul 7 06:14:39.834115 containerd[1448]: time="2025-07-07T06:14:39.833836201Z" level=info msg="StartContainer for \"913f3eab202add45d46fd24cc23af6aa360210e2045d60ce6d73c1a4c14d3f96\""
Jul 7 06:14:39.859557 systemd[1]: Started cri-containerd-a0c86e5276cc0c6c4af7e7939d59abd0a7ba13772b34cb5fefc7ad794835886f.scope - libcontainer container a0c86e5276cc0c6c4af7e7939d59abd0a7ba13772b34cb5fefc7ad794835886f.
Jul 7 06:14:39.862308 systemd[1]: Started cri-containerd-913f3eab202add45d46fd24cc23af6aa360210e2045d60ce6d73c1a4c14d3f96.scope - libcontainer container 913f3eab202add45d46fd24cc23af6aa360210e2045d60ce6d73c1a4c14d3f96.
Jul 7 06:14:39.893941 containerd[1448]: time="2025-07-07T06:14:39.893846638Z" level=info msg="StartContainer for \"913f3eab202add45d46fd24cc23af6aa360210e2045d60ce6d73c1a4c14d3f96\" returns successfully"
Jul 7 06:14:39.893941 containerd[1448]: time="2025-07-07T06:14:39.893898149Z" level=info msg="StartContainer for \"a0c86e5276cc0c6c4af7e7939d59abd0a7ba13772b34cb5fefc7ad794835886f\" returns successfully"
Jul 7 06:14:40.570307 kubelet[2460]: E0707 06:14:40.570272 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:40.574230 kubelet[2460]: E0707 06:14:40.573599 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:40.583624 kubelet[2460]: I0707 06:14:40.583547 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sm5l9" podStartSLOduration=22.583531489 podStartE2EDuration="22.583531489s" podCreationTimestamp="2025-07-07 06:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:40.583421709 +0000 UTC m=+29.174386320" watchObservedRunningTime="2025-07-07 06:14:40.583531489 +0000 UTC m=+29.174496020"
Jul 7 06:14:40.594446 kubelet[2460]: I0707 06:14:40.594100 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwxt6" podStartSLOduration=22.594085294 podStartE2EDuration="22.594085294s" podCreationTimestamp="2025-07-07 06:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:40.592987973 +0000 UTC m=+29.183952544" watchObservedRunningTime="2025-07-07 06:14:40.594085294 +0000 UTC m=+29.185049865"
Jul 7 06:14:40.758035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763637584.mount: Deactivated successfully.
Jul 7 06:14:41.574712 kubelet[2460]: E0707 06:14:41.574654 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:41.575317 kubelet[2460]: E0707 06:14:41.575295 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:42.576252 kubelet[2460]: E0707 06:14:42.576196 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.860852 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:42274.service - OpenSSH per-connection server daemon (10.0.0.1:42274).
Jul 7 06:14:43.908235 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 42274 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:43.909713 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:43.914491 systemd-logind[1418]: New session 9 of user core.
Jul 7 06:14:43.923566 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:14:44.036834 sshd[3884]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:44.040380 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:42274.service: Deactivated successfully.
Jul 7 06:14:44.042041 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:14:44.043493 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:14:44.044471 systemd-logind[1418]: Removed session 9.
Jul 7 06:14:49.047030 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:42278.service - OpenSSH per-connection server daemon (10.0.0.1:42278).
Jul 7 06:14:49.078186 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 42278 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:49.079272 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:49.082380 systemd-logind[1418]: New session 10 of user core.
Jul 7 06:14:49.088615 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:14:49.192275 sshd[3901]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:49.195363 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:42278.service: Deactivated successfully.
Jul 7 06:14:49.197151 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:14:49.197786 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:14:49.198666 systemd-logind[1418]: Removed session 10.
Jul 7 06:14:54.205094 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002).
Jul 7 06:14:54.239582 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:54.240796 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:54.244168 systemd-logind[1418]: New session 11 of user core.
Jul 7 06:14:54.259584 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:14:54.369894 sshd[3917]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:54.381168 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:59002.service: Deactivated successfully.
Jul 7 06:14:54.382832 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:14:54.384300 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:14:54.393719 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:59014.service - OpenSSH per-connection server daemon (10.0.0.1:59014).
Jul 7 06:14:54.394796 systemd-logind[1418]: Removed session 11.
Jul 7 06:14:54.422797 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 59014 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:54.423976 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:54.427681 systemd-logind[1418]: New session 12 of user core.
Jul 7 06:14:54.438553 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:14:54.581803 sshd[3932]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:54.598463 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:59014.service: Deactivated successfully.
Jul 7 06:14:54.600201 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:14:54.608731 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:14:54.619298 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:59026.service - OpenSSH per-connection server daemon (10.0.0.1:59026).
Jul 7 06:14:54.624540 systemd-logind[1418]: Removed session 12.
Jul 7 06:14:54.656823 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 59026 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:54.658437 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:54.662191 systemd-logind[1418]: New session 13 of user core.
Jul 7 06:14:54.676603 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:14:54.784165 sshd[3944]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:54.787393 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:59026.service: Deactivated successfully.
Jul 7 06:14:54.789213 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:14:54.789842 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:14:54.790568 systemd-logind[1418]: Removed session 13.
Jul 7 06:14:59.793997 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030).
Jul 7 06:14:59.825528 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:14:59.826653 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:59.830388 systemd-logind[1418]: New session 14 of user core.
Jul 7 06:14:59.836540 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:14:59.940470 sshd[3961]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:59.943583 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:59030.service: Deactivated successfully.
Jul 7 06:14:59.945342 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:14:59.945957 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:14:59.946710 systemd-logind[1418]: Removed session 14.
Jul 7 06:15:04.952008 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:49164.service - OpenSSH per-connection server daemon (10.0.0.1:49164).
Jul 7 06:15:04.988489 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 49164 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:04.988912 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:04.992695 systemd-logind[1418]: New session 15 of user core.
Jul 7 06:15:05.005571 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:15:05.123178 sshd[3975]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:05.137023 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:49164.service: Deactivated successfully.
Jul 7 06:15:05.138726 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:15:05.140176 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:15:05.141740 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:49178.service - OpenSSH per-connection server daemon (10.0.0.1:49178).
Jul 7 06:15:05.142922 systemd-logind[1418]: Removed session 15.
Jul 7 06:15:05.185884 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 49178 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:05.187641 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:05.192181 systemd-logind[1418]: New session 16 of user core.
Jul 7 06:15:05.203545 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:15:05.431774 sshd[3990]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:05.440949 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:49178.service: Deactivated successfully.
Jul 7 06:15:05.443726 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:15:05.444911 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:15:05.455891 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:49182.service - OpenSSH per-connection server daemon (10.0.0.1:49182).
Jul 7 06:15:05.456767 systemd-logind[1418]: Removed session 16.
Jul 7 06:15:05.491051 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 49182 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:05.491860 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:05.495966 systemd-logind[1418]: New session 17 of user core.
Jul 7 06:15:05.508548 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:15:06.228968 sshd[4003]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:06.237960 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:49182.service: Deactivated successfully.
Jul 7 06:15:06.243319 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:15:06.245777 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:15:06.252761 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:49190.service - OpenSSH per-connection server daemon (10.0.0.1:49190).
Jul 7 06:15:06.254084 systemd-logind[1418]: Removed session 17.
Jul 7 06:15:06.285467 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 49190 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:06.286660 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:06.291193 systemd-logind[1418]: New session 18 of user core.
Jul 7 06:15:06.298644 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:15:06.504303 sshd[4024]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:06.511733 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:49190.service: Deactivated successfully.
Jul 7 06:15:06.513324 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:15:06.517489 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:15:06.526677 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:49192.service - OpenSSH per-connection server daemon (10.0.0.1:49192).
Jul 7 06:15:06.527878 systemd-logind[1418]: Removed session 18.
Jul 7 06:15:06.555599 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 49192 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:06.556851 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:06.560511 systemd-logind[1418]: New session 19 of user core.
Jul 7 06:15:06.574606 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:15:06.686832 sshd[4037]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:06.690300 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:49192.service: Deactivated successfully.
Jul 7 06:15:06.693224 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:15:06.695072 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:15:06.695922 systemd-logind[1418]: Removed session 19.
Jul 7 06:15:11.696908 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:49198.service - OpenSSH per-connection server daemon (10.0.0.1:49198).
Jul 7 06:15:11.729008 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:11.730270 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:11.734087 systemd-logind[1418]: New session 20 of user core.
Jul 7 06:15:11.743548 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:15:11.846782 sshd[4057]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:11.849703 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:49198.service: Deactivated successfully.
Jul 7 06:15:11.851338 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:15:11.853257 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:15:11.854146 systemd-logind[1418]: Removed session 20.
Jul 7 06:15:16.856174 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:33456.service - OpenSSH per-connection server daemon (10.0.0.1:33456).
Jul 7 06:15:16.887760 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 33456 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:16.888963 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:16.893013 systemd-logind[1418]: New session 21 of user core.
Jul 7 06:15:16.902610 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:15:17.005274 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:17.007928 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:15:17.008099 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:33456.service: Deactivated successfully.
Jul 7 06:15:17.009528 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:15:17.010859 systemd-logind[1418]: Removed session 21.
Jul 7 06:15:22.016001 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472).
Jul 7 06:15:22.048139 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:22.049299 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:22.053276 systemd-logind[1418]: New session 22 of user core.
Jul 7 06:15:22.064582 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:15:22.168791 sshd[4087]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:22.177935 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:33472.service: Deactivated successfully.
Jul 7 06:15:22.179297 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:15:22.182504 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:15:22.192647 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:33486.service - OpenSSH per-connection server daemon (10.0.0.1:33486).
Jul 7 06:15:22.193540 systemd-logind[1418]: Removed session 22.
Jul 7 06:15:22.220969 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:22.222170 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:22.225566 systemd-logind[1418]: New session 23 of user core.
Jul 7 06:15:22.236535 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:15:24.682854 containerd[1448]: time="2025-07-07T06:15:24.682706726Z" level=info msg="StopContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" with timeout 30 (s)"
Jul 7 06:15:24.683729 containerd[1448]: time="2025-07-07T06:15:24.683700002Z" level=info msg="Stop container \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" with signal terminated"
Jul 7 06:15:24.700129 systemd[1]: cri-containerd-6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed.scope: Deactivated successfully.
Jul 7 06:15:24.711617 containerd[1448]: time="2025-07-07T06:15:24.711363640Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:15:24.717452 containerd[1448]: time="2025-07-07T06:15:24.717377090Z" level=info msg="StopContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" with timeout 2 (s)"
Jul 7 06:15:24.717674 containerd[1448]: time="2025-07-07T06:15:24.717647758Z" level=info msg="Stop container \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" with signal terminated"
Jul 7 06:15:24.719586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed-rootfs.mount: Deactivated successfully.
Jul 7 06:15:24.724781 systemd-networkd[1380]: lxc_health: Link DOWN
Jul 7 06:15:24.724788 systemd-networkd[1380]: lxc_health: Lost carrier
Jul 7 06:15:24.727599 containerd[1448]: time="2025-07-07T06:15:24.727191970Z" level=info msg="shim disconnected" id=6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed namespace=k8s.io
Jul 7 06:15:24.727599 containerd[1448]: time="2025-07-07T06:15:24.727338883Z" level=warning msg="cleaning up after shim disconnected" id=6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed namespace=k8s.io
Jul 7 06:15:24.727599 containerd[1448]: time="2025-07-07T06:15:24.727352482Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:24.752136 systemd[1]: cri-containerd-e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff.scope: Deactivated successfully.
Jul 7 06:15:24.752476 systemd[1]: cri-containerd-e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff.scope: Consumed 6.412s CPU time.
Jul 7 06:15:24.768152 containerd[1448]: time="2025-07-07T06:15:24.768112693Z" level=info msg="StopContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" returns successfully"
Jul 7 06:15:24.768970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff-rootfs.mount: Deactivated successfully.
Jul 7 06:15:24.770387 containerd[1448]: time="2025-07-07T06:15:24.770219198Z" level=info msg="StopPodSandbox for \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\""
Jul 7 06:15:24.770387 containerd[1448]: time="2025-07-07T06:15:24.770256357Z" level=info msg="Container to stop \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.771746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030-shm.mount: Deactivated successfully.
Jul 7 06:15:24.774117 containerd[1448]: time="2025-07-07T06:15:24.773920072Z" level=info msg="shim disconnected" id=e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff namespace=k8s.io
Jul 7 06:15:24.774117 containerd[1448]: time="2025-07-07T06:15:24.773961830Z" level=warning msg="cleaning up after shim disconnected" id=e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff namespace=k8s.io
Jul 7 06:15:24.774117 containerd[1448]: time="2025-07-07T06:15:24.773969830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:24.779517 systemd[1]: cri-containerd-17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030.scope: Deactivated successfully.
Jul 7 06:15:24.790018 containerd[1448]: time="2025-07-07T06:15:24.789898515Z" level=info msg="StopContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" returns successfully"
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790453010Z" level=info msg="StopPodSandbox for \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\""
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790486329Z" level=info msg="Container to stop \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790497368Z" level=info msg="Container to stop \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790506768Z" level=info msg="Container to stop \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790516287Z" level=info msg="Container to stop \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.790643 containerd[1448]: time="2025-07-07T06:15:24.790524887Z" level=info msg="Container to stop \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:24.795516 systemd[1]: cri-containerd-146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412.scope: Deactivated successfully.
Jul 7 06:15:24.811351 containerd[1448]: time="2025-07-07T06:15:24.810780538Z" level=info msg="shim disconnected" id=17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030 namespace=k8s.io
Jul 7 06:15:24.811351 containerd[1448]: time="2025-07-07T06:15:24.811215038Z" level=warning msg="cleaning up after shim disconnected" id=17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030 namespace=k8s.io
Jul 7 06:15:24.811351 containerd[1448]: time="2025-07-07T06:15:24.811224958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:24.817977 containerd[1448]: time="2025-07-07T06:15:24.817926577Z" level=info msg="shim disconnected" id=146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412 namespace=k8s.io
Jul 7 06:15:24.817977 containerd[1448]: time="2025-07-07T06:15:24.817972095Z" level=warning msg="cleaning up after shim disconnected" id=146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412 namespace=k8s.io
Jul 7 06:15:24.818160 containerd[1448]: time="2025-07-07T06:15:24.817980455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:24.823531 containerd[1448]: time="2025-07-07T06:15:24.823349374Z" level=info msg="TearDown network for sandbox \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\" successfully"
Jul 7 06:15:24.823531 containerd[1448]: time="2025-07-07T06:15:24.823381172Z" level=info msg="StopPodSandbox for \"17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030\" returns successfully"
Jul 7 06:15:24.831477 containerd[1448]: time="2025-07-07T06:15:24.831372214Z" level=info msg="TearDown network for sandbox \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" successfully"
Jul 7 06:15:24.831477 containerd[1448]: time="2025-07-07T06:15:24.831416732Z" level=info msg="StopPodSandbox for \"146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412\" returns successfully"
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926528 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-kernel\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926588 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rt2n\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-kube-api-access-9rt2n\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926623 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-hubble-tls\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926653 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-config-path\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926680 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-net\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927013 kubelet[2460]: I0707 06:15:24.926705 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-xtables-lock\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926730 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-etc-cni-netd\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926756 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-bpf-maps\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926783 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4pdx\" (UniqueName: \"kubernetes.io/projected/cc59d381-a793-42af-a4b2-914c5b7e4a8f-kube-api-access-n4pdx\") pod \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\" (UID: \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926801 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-run\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926817 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/138efa40-59f0-43ea-8bb4-b15c317538f3-clustermesh-secrets\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927531 kubelet[2460]: I0707 06:15:24.926833 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-lib-modules\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927676 kubelet[2460]: I0707 06:15:24.926846 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cni-path\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927676 kubelet[2460]: I0707 06:15:24.926861 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-cgroup\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927676 kubelet[2460]: I0707 06:15:24.926874 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-hostproc\") pod \"138efa40-59f0-43ea-8bb4-b15c317538f3\" (UID: \"138efa40-59f0-43ea-8bb4-b15c317538f3\") "
Jul 7 06:15:24.927676 kubelet[2460]: I0707 06:15:24.926889 2460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc59d381-a793-42af-a4b2-914c5b7e4a8f-cilium-config-path\") pod \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\" (UID: \"cc59d381-a793-42af-a4b2-914c5b7e4a8f\") "
Jul 7 06:15:24.930748 kubelet[2460]: I0707 06:15:24.930662 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930748 kubelet[2460]: I0707 06:15:24.930664 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930748 kubelet[2460]: I0707 06:15:24.930722 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930748 kubelet[2460]: I0707 06:15:24.930737 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930748 kubelet[2460]: I0707 06:15:24.930751 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930968 kubelet[2460]: I0707 06:15:24.930765 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930968 kubelet[2460]: I0707 06:15:24.930778 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.930968 kubelet[2460]: I0707 06:15:24.930792 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.933489 kubelet[2460]: I0707 06:15:24.933381 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-kube-api-access-9rt2n" (OuterVolumeSpecName: "kube-api-access-9rt2n") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "kube-api-access-9rt2n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:15:24.933489 kubelet[2460]: I0707 06:15:24.933447 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.933489 kubelet[2460]: I0707 06:15:24.933467 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:24.934866 kubelet[2460]: I0707 06:15:24.933886 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc59d381-a793-42af-a4b2-914c5b7e4a8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc59d381-a793-42af-a4b2-914c5b7e4a8f" (UID: "cc59d381-a793-42af-a4b2-914c5b7e4a8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 06:15:24.935074 kubelet[2460]: I0707 06:15:24.935035 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc59d381-a793-42af-a4b2-914c5b7e4a8f-kube-api-access-n4pdx" (OuterVolumeSpecName: "kube-api-access-n4pdx") pod "cc59d381-a793-42af-a4b2-914c5b7e4a8f" (UID: "cc59d381-a793-42af-a4b2-914c5b7e4a8f"). InnerVolumeSpecName "kube-api-access-n4pdx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:15:24.935366 kubelet[2460]: I0707 06:15:24.935333 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138efa40-59f0-43ea-8bb4-b15c317538f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 7 06:15:24.936774 kubelet[2460]: I0707 06:15:24.936730 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:15:24.939766 kubelet[2460]: I0707 06:15:24.939723 2460 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "138efa40-59f0-43ea-8bb4-b15c317538f3" (UID: "138efa40-59f0-43ea-8bb4-b15c317538f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 06:15:25.027246 kubelet[2460]: I0707 06:15:25.027214 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027246 kubelet[2460]: I0707 06:15:25.027240 2460 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027246 kubelet[2460]: I0707 06:15:25.027250 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc59d381-a793-42af-a4b2-914c5b7e4a8f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027260 2460 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027268 2460 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027276 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027291 2460 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027300 2460 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027309 2460 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rt2n\" (UniqueName: \"kubernetes.io/projected/138efa40-59f0-43ea-8bb4-b15c317538f3-kube-api-access-9rt2n\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027317 2460 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027381 kubelet[2460]: I0707 06:15:25.027325 2460 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027591 kubelet[2460]: I0707 06:15:25.027335 2460 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4pdx\" (UniqueName: \"kubernetes.io/projected/cc59d381-a793-42af-a4b2-914c5b7e4a8f-kube-api-access-n4pdx\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027591 kubelet[2460]: I0707 06:15:25.027342 2460 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027591 kubelet[2460]: I0707 06:15:25.027351 2460 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/138efa40-59f0-43ea-8bb4-b15c317538f3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027591 kubelet[2460]: I0707 06:15:25.027362 2460 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.027591 kubelet[2460]: I0707 06:15:25.027370 2460 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/138efa40-59f0-43ea-8bb4-b15c317538f3-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:25.495751 systemd[1]: Removed slice kubepods-burstable-pod138efa40_59f0_43ea_8bb4_b15c317538f3.slice - libcontainer container kubepods-burstable-pod138efa40_59f0_43ea_8bb4_b15c317538f3.slice.
Jul 7 06:15:25.495842 systemd[1]: kubepods-burstable-pod138efa40_59f0_43ea_8bb4_b15c317538f3.slice: Consumed 6.558s CPU time.
Jul 7 06:15:25.496715 systemd[1]: Removed slice kubepods-besteffort-podcc59d381_a793_42af_a4b2_914c5b7e4a8f.slice - libcontainer container kubepods-besteffort-podcc59d381_a793_42af_a4b2_914c5b7e4a8f.slice.
Jul 7 06:15:25.667902 kubelet[2460]: I0707 06:15:25.667873 2460 scope.go:117] "RemoveContainer" containerID="6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed"
Jul 7 06:15:25.670180 containerd[1448]: time="2025-07-07T06:15:25.669354099Z" level=info msg="RemoveContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\""
Jul 7 06:15:25.674573 containerd[1448]: time="2025-07-07T06:15:25.674471316Z" level=info msg="RemoveContainer for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" returns successfully"
Jul 7 06:15:25.674890 kubelet[2460]: I0707 06:15:25.674871 2460 scope.go:117] "RemoveContainer" containerID="6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed"
Jul 7 06:15:25.675137 containerd[1448]: time="2025-07-07T06:15:25.675045931Z" level=error msg="ContainerStatus for \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\": not found"
Jul 7 06:15:25.675524 kubelet[2460]: E0707 06:15:25.675419 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\": not found" containerID="6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed"
Jul 7 06:15:25.682138 kubelet[2460]: I0707 06:15:25.682027 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed"} err="failed to get container status \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c59fd72a5c267c4dcc63236390540b43e424bb9de694902ad57204043c7a6ed\": not found"
Jul 7 06:15:25.682138 kubelet[2460]: I0707 06:15:25.682121 2460 scope.go:117] "RemoveContainer" containerID="e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff"
Jul 7 06:15:25.683415 containerd[1448]: time="2025-07-07T06:15:25.683087182Z" level=info msg="RemoveContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\""
Jul 7 06:15:25.686046 containerd[1448]: time="2025-07-07T06:15:25.685736947Z" level=info msg="RemoveContainer for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" returns successfully"
Jul 7 06:15:25.686327 kubelet[2460]: I0707 06:15:25.685863 2460 scope.go:117] "RemoveContainer" containerID="d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996"
Jul 7 06:15:25.687492 containerd[1448]: time="2025-07-07T06:15:25.687171524Z" level=info msg="RemoveContainer for \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\""
Jul 7 06:15:25.691385 containerd[1448]: time="2025-07-07T06:15:25.691337023Z" level=info msg="RemoveContainer for \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\" returns successfully"
Jul 7 06:15:25.691550 kubelet[2460]: I0707 06:15:25.691497 2460 scope.go:117] "RemoveContainer" containerID="966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298"
Jul 7 06:15:25.693016 containerd[1448]: time="2025-07-07T06:15:25.692975272Z" level=info msg="RemoveContainer for \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\""
Jul 7 06:15:25.693112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17b10e0b7469d6ef29642f7078551f1549f89fc8621c8478261a00da4e1c4030-rootfs.mount: Deactivated successfully.
Jul 7 06:15:25.693234 systemd[1]: var-lib-kubelet-pods-cc59d381\x2da793\x2d42af\x2da4b2\x2d914c5b7e4a8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn4pdx.mount: Deactivated successfully.
Jul 7 06:15:25.693310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412-rootfs.mount: Deactivated successfully.
Jul 7 06:15:25.693393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-146541ccf62ad43d20999ccda32375e98e1730e52be8f6760ce6d47ad0859412-shm.mount: Deactivated successfully.
Jul 7 06:15:25.693475 systemd[1]: var-lib-kubelet-pods-138efa40\x2d59f0\x2d43ea\x2d8bb4\x2db15c317538f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9rt2n.mount: Deactivated successfully.
Jul 7 06:15:25.693573 systemd[1]: var-lib-kubelet-pods-138efa40\x2d59f0\x2d43ea\x2d8bb4\x2db15c317538f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 7 06:15:25.693649 systemd[1]: var-lib-kubelet-pods-138efa40\x2d59f0\x2d43ea\x2d8bb4\x2db15c317538f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 7 06:15:25.695566 containerd[1448]: time="2025-07-07T06:15:25.695540800Z" level=info msg="RemoveContainer for \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\" returns successfully"
Jul 7 06:15:25.695875 kubelet[2460]: I0707 06:15:25.695767 2460 scope.go:117] "RemoveContainer" containerID="bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd"
Jul 7 06:15:25.697046 containerd[1448]: time="2025-07-07T06:15:25.696990337Z" level=info msg="RemoveContainer for \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\""
Jul 7 06:15:25.699979 containerd[1448]: time="2025-07-07T06:15:25.699951448Z" level=info msg="RemoveContainer for \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\" returns successfully"
Jul 7 06:15:25.700967 kubelet[2460]: I0707 06:15:25.700938 2460 scope.go:117] "RemoveContainer" containerID="0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b"
Jul 7 06:15:25.701911 containerd[1448]: time="2025-07-07T06:15:25.701867665Z" level=info msg="RemoveContainer for \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\""
Jul 7 06:15:25.703979 containerd[1448]: time="2025-07-07T06:15:25.703947795Z" level=info msg="RemoveContainer for \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\" returns successfully"
Jul 7 06:15:25.704202 kubelet[2460]: I0707 06:15:25.704122 2460 scope.go:117] "RemoveContainer" containerID="e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff"
Jul 7 06:15:25.704466 containerd[1448]: time="2025-07-07T06:15:25.704370656Z" level=error msg="ContainerStatus for \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\": not found"
Jul 7 06:15:25.704543 kubelet[2460]: E0707 06:15:25.704508 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\": not found" containerID="e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff"
Jul 7 06:15:25.704593 kubelet[2460]: I0707 06:15:25.704540 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff"} err="failed to get container status \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"e082294eb6d0a29f6d637e8b316796fff2ea73c21262782b6f34ed3a753e19ff\": not found"
Jul 7 06:15:25.704593 kubelet[2460]: I0707 06:15:25.704561 2460 scope.go:117] "RemoveContainer" containerID="d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996"
Jul 7 06:15:25.704826 kubelet[2460]: E0707 06:15:25.704803 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\": not found" containerID="d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996"
Jul 7 06:15:25.704826 kubelet[2460]: I0707 06:15:25.704819 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996"} err="failed to get container status \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\": not found"
Jul 7 06:15:25.704884 containerd[1448]: time="2025-07-07T06:15:25.704702802Z" level=error msg="ContainerStatus for \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1f58887dc5a0dc22257f1a4e602765c0a4344d2d5a97894a8dc66a2e1b5e996\": not found"
Jul 7 06:15:25.704909 kubelet[2460]: I0707 06:15:25.704832 2460 scope.go:117] "RemoveContainer" containerID="966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298"
Jul 7 06:15:25.705037 containerd[1448]: time="2025-07-07T06:15:25.704985230Z" level=error msg="ContainerStatus for \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\": not found"
Jul 7 06:15:25.705128 kubelet[2460]: E0707 06:15:25.705110 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\": not found" containerID="966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298"
Jul 7 06:15:25.705199 kubelet[2460]: I0707 06:15:25.705180 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298"} err="failed to get container status \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\": rpc error: code = NotFound desc = an error occurred when try to find container \"966a734d1ec1a3290e8efc9366d132e72faa06c4908c9e289d6c060615188298\": not found"
Jul 7 06:15:25.705236 kubelet[2460]: I0707 06:15:25.705198 2460 scope.go:117] "RemoveContainer" containerID="bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd"
Jul 7 06:15:25.705357 containerd[1448]: time="2025-07-07T06:15:25.705332334Z" level=error msg="ContainerStatus for \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\": not found"
Jul 7 06:15:25.705482 kubelet[2460]: E0707 06:15:25.705464 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\": not found" containerID="bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd"
Jul 7 06:15:25.705524 kubelet[2460]: I0707 06:15:25.705485 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd"} err="failed to get container status \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdaa3b50da85a81f5e36bae8cf9605572f7a4920c674d3864de6a86475079cdd\": not found"
Jul 7 06:15:25.705524 kubelet[2460]: I0707 06:15:25.705500 2460 scope.go:117] "RemoveContainer" containerID="0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b"
Jul 7 06:15:25.705674 containerd[1448]: time="2025-07-07T06:15:25.705645441Z" level=error msg="ContainerStatus for \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\": not found"
Jul 7 06:15:25.705820 kubelet[2460]: E0707 06:15:25.705772 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\": not found" containerID="0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b"
Jul 7 06:15:25.705820 kubelet[2460]: I0707 06:15:25.705799 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b"} err="failed to get container status \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bc5d623a0ac839d69ab3d12a55b7881a3594b9a51e9993a498f2c6cc8a42b1b\": not found"
Jul 7 06:15:26.487585 kubelet[2460]: E0707 06:15:26.487533 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:26.533705 kubelet[2460]: E0707 06:15:26.533671 2460 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 06:15:26.646786 sshd[4101]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:26.655910 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:33486.service: Deactivated successfully.
Jul 7 06:15:26.658315 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:15:26.658707 systemd[1]: session-23.scope: Consumed 1.786s CPU time.
Jul 7 06:15:26.660004 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:15:26.668685 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:57452.service - OpenSSH per-connection server daemon (10.0.0.1:57452).
Jul 7 06:15:26.669829 systemd-logind[1418]: Removed session 23.
Jul 7 06:15:26.702460 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 57452 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:26.704034 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:26.707334 systemd-logind[1418]: New session 24 of user core.
Jul 7 06:15:26.713597 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:15:27.494587 kubelet[2460]: I0707 06:15:27.493199 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138efa40-59f0-43ea-8bb4-b15c317538f3" path="/var/lib/kubelet/pods/138efa40-59f0-43ea-8bb4-b15c317538f3/volumes"
Jul 7 06:15:27.494587 kubelet[2460]: I0707 06:15:27.493778 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc59d381-a793-42af-a4b2-914c5b7e4a8f" path="/var/lib/kubelet/pods/cc59d381-a793-42af-a4b2-914c5b7e4a8f/volumes"
Jul 7 06:15:27.731622 sshd[4262]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:27.741359 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:57452.service: Deactivated successfully.
Jul 7 06:15:27.744935 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:15:27.746390 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:15:27.748892 kubelet[2460]: I0707 06:15:27.748850 2460 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc59d381-a793-42af-a4b2-914c5b7e4a8f" containerName="cilium-operator"
Jul 7 06:15:27.748892 kubelet[2460]: I0707 06:15:27.748875 2460 memory_manager.go:355] "RemoveStaleState removing state" podUID="138efa40-59f0-43ea-8bb4-b15c317538f3" containerName="cilium-agent"
Jul 7 06:15:27.758817 systemd[1]: Started sshd@24-10.0.0.145:22-10.0.0.1:57468.service - OpenSSH per-connection server daemon (10.0.0.1:57468).
Jul 7 06:15:27.761784 systemd-logind[1418]: Removed session 24.
Jul 7 06:15:27.768143 systemd[1]: Created slice kubepods-burstable-pod5db6e42a_e761_400b_aa93_f06629a4cc43.slice - libcontainer container kubepods-burstable-pod5db6e42a_e761_400b_aa93_f06629a4cc43.slice.
Jul 7 06:15:27.790253 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 57468 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:27.792011 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:27.802876 systemd-logind[1418]: New session 25 of user core.
Jul 7 06:15:27.811819 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:15:27.841111 kubelet[2460]: I0707 06:15:27.841071 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-xtables-lock\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841111 kubelet[2460]: I0707 06:15:27.841121 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-cni-path\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841111 kubelet[2460]: I0707 06:15:27.841143 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5db6e42a-e761-400b-aa93-f06629a4cc43-clustermesh-secrets\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841291 kubelet[2460]: I0707 06:15:27.841164 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5db6e42a-e761-400b-aa93-f06629a4cc43-cilium-config-path\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841291 kubelet[2460]: I0707 06:15:27.841181 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-bpf-maps\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841291 kubelet[2460]: I0707 06:15:27.841196 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-etc-cni-netd\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841291 kubelet[2460]: I0707 06:15:27.841211 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-host-proc-sys-kernel\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841291 kubelet[2460]: I0707 06:15:27.841233 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gkm\" (UniqueName: \"kubernetes.io/projected/5db6e42a-e761-400b-aa93-f06629a4cc43-kube-api-access-q7gkm\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841250 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-cilium-cgroup\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841264 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-lib-modules\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841280 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5db6e42a-e761-400b-aa93-f06629a4cc43-hubble-tls\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841299 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-hostproc\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841314 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5db6e42a-e761-400b-aa93-f06629a4cc43-cilium-ipsec-secrets\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841459 kubelet[2460]: I0707 06:15:27.841328 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-host-proc-sys-net\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.841588 kubelet[2460]: I0707 06:15:27.841346 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5db6e42a-e761-400b-aa93-f06629a4cc43-cilium-run\") pod \"cilium-84xjt\" (UID: \"5db6e42a-e761-400b-aa93-f06629a4cc43\") " pod="kube-system/cilium-84xjt"
Jul 7 06:15:27.863412 sshd[4276]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:27.874103 systemd[1]: sshd@24-10.0.0.145:22-10.0.0.1:57468.service: Deactivated successfully.
Jul 7 06:15:27.877746 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:15:27.879822 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:15:27.882225 systemd[1]: Started sshd@25-10.0.0.145:22-10.0.0.1:57470.service - OpenSSH per-connection server daemon (10.0.0.1:57470).
Jul 7 06:15:27.883142 systemd-logind[1418]: Removed session 25.
Jul 7 06:15:27.916946 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 57470 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:15:27.918268 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:27.921898 systemd-logind[1418]: New session 26 of user core.
Jul 7 06:15:27.930538 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:15:28.073089 kubelet[2460]: E0707 06:15:28.072526 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:28.073391 containerd[1448]: time="2025-07-07T06:15:28.073323450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84xjt,Uid:5db6e42a-e761-400b-aa93-f06629a4cc43,Namespace:kube-system,Attempt:0,}"
Jul 7 06:15:28.094550 containerd[1448]: time="2025-07-07T06:15:28.094454655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:15:28.094550 containerd[1448]: time="2025-07-07T06:15:28.094508893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:15:28.094550 containerd[1448]: time="2025-07-07T06:15:28.094520612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:15:28.094784 containerd[1448]: time="2025-07-07T06:15:28.094606969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:15:28.114614 systemd[1]: Started cri-containerd-461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474.scope - libcontainer container 461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474.
Jul 7 06:15:28.132905 containerd[1448]: time="2025-07-07T06:15:28.132770420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84xjt,Uid:5db6e42a-e761-400b-aa93-f06629a4cc43,Namespace:kube-system,Attempt:0,} returns sandbox id \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\""
Jul 7 06:15:28.133654 kubelet[2460]: E0707 06:15:28.133628 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:28.136939 containerd[1448]: time="2025-07-07T06:15:28.136602029Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:15:28.147029 containerd[1448]: time="2025-07-07T06:15:28.146977418Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157\""
Jul 7 06:15:28.148275 containerd[1448]: time="2025-07-07T06:15:28.148240609Z" level=info msg="StartContainer for \"485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157\""
Jul 7 06:15:28.178585 systemd[1]: Started cri-containerd-485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157.scope - libcontainer container 485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157.
Jul 7 06:15:28.198761 containerd[1448]: time="2025-07-07T06:15:28.198616857Z" level=info msg="StartContainer for \"485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157\" returns successfully"
Jul 7 06:15:28.211096 systemd[1]: cri-containerd-485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157.scope: Deactivated successfully.
Jul 7 06:15:28.236300 containerd[1448]: time="2025-07-07T06:15:28.236240050Z" level=info msg="shim disconnected" id=485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157 namespace=k8s.io
Jul 7 06:15:28.236300 containerd[1448]: time="2025-07-07T06:15:28.236288088Z" level=warning msg="cleaning up after shim disconnected" id=485ed8ec3348c994a2f23fbc5fcde8b304a2d3b4f6ea633246d6c652f745d157 namespace=k8s.io
Jul 7 06:15:28.236300 containerd[1448]: time="2025-07-07T06:15:28.236297168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:28.682223 kubelet[2460]: E0707 06:15:28.682025 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:28.685466 containerd[1448]: time="2025-07-07T06:15:28.685421693Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:15:28.708413 containerd[1448]: time="2025-07-07T06:15:28.708294309Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9\""
Jul 7 06:15:28.709427 containerd[1448]: time="2025-07-07T06:15:28.709058519Z" level=info msg="StartContainer for \"d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9\""
Jul 7 06:15:28.730582 systemd[1]: Started cri-containerd-d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9.scope - libcontainer container d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9.
Jul 7 06:15:28.750708 containerd[1448]: time="2025-07-07T06:15:28.750564918Z" level=info msg="StartContainer for \"d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9\" returns successfully"
Jul 7 06:15:28.755332 systemd[1]: cri-containerd-d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9.scope: Deactivated successfully.
Jul 7 06:15:28.779665 containerd[1448]: time="2025-07-07T06:15:28.779610490Z" level=info msg="shim disconnected" id=d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9 namespace=k8s.io
Jul 7 06:15:28.779998 containerd[1448]: time="2025-07-07T06:15:28.779837601Z" level=warning msg="cleaning up after shim disconnected" id=d7fb2d4fc929b01ef1e1e869f1f49b68f07b94bfa559b6ddbaf166b84a0739e9 namespace=k8s.io
Jul 7 06:15:28.779998 containerd[1448]: time="2025-07-07T06:15:28.779852481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:29.684229 kubelet[2460]: E0707 06:15:29.684174 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:29.685893 containerd[1448]: time="2025-07-07T06:15:29.685776474Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 06:15:29.702573 containerd[1448]: time="2025-07-07T06:15:29.702520753Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2\""
Jul 7 06:15:29.702993 containerd[1448]: time="2025-07-07T06:15:29.702965736Z" level=info msg="StartContainer for \"e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2\""
Jul 7 06:15:29.732555 systemd[1]: Started cri-containerd-e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2.scope - libcontainer container e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2.
Jul 7 06:15:29.753763 containerd[1448]: time="2025-07-07T06:15:29.753728672Z" level=info msg="StartContainer for \"e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2\" returns successfully"
Jul 7 06:15:29.754421 systemd[1]: cri-containerd-e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2.scope: Deactivated successfully.
Jul 7 06:15:29.776016 containerd[1448]: time="2025-07-07T06:15:29.775965301Z" level=info msg="shim disconnected" id=e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2 namespace=k8s.io
Jul 7 06:15:29.776016 containerd[1448]: time="2025-07-07T06:15:29.776014659Z" level=warning msg="cleaning up after shim disconnected" id=e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2 namespace=k8s.io
Jul 7 06:15:29.776184 containerd[1448]: time="2025-07-07T06:15:29.776025378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:29.946237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62d77eca3f73e61c98c33f9fb6588b42532e197dc27e3def7fa5930be03d0d2-rootfs.mount: Deactivated successfully.
Jul 7 06:15:30.687974 kubelet[2460]: E0707 06:15:30.687602 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:30.689332 containerd[1448]: time="2025-07-07T06:15:30.689286948Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:15:30.701160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925695238.mount: Deactivated successfully.
Jul 7 06:15:30.703660 containerd[1448]: time="2025-07-07T06:15:30.703621856Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c\""
Jul 7 06:15:30.705046 containerd[1448]: time="2025-07-07T06:15:30.704026521Z" level=info msg="StartContainer for \"c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c\""
Jul 7 06:15:30.734580 systemd[1]: Started cri-containerd-c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c.scope - libcontainer container c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c.
Jul 7 06:15:30.754082 systemd[1]: cri-containerd-c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c.scope: Deactivated successfully.
Jul 7 06:15:30.763896 containerd[1448]: time="2025-07-07T06:15:30.763736226Z" level=info msg="StartContainer for \"c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c\" returns successfully"
Jul 7 06:15:30.783617 containerd[1448]: time="2025-07-07T06:15:30.783561371Z" level=info msg="shim disconnected" id=c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c namespace=k8s.io
Jul 7 06:15:30.783617 containerd[1448]: time="2025-07-07T06:15:30.783613129Z" level=warning msg="cleaning up after shim disconnected" id=c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c namespace=k8s.io
Jul 7 06:15:30.783617 containerd[1448]: time="2025-07-07T06:15:30.783621568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:30.946427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c908bb7263388cef157688abed09def8c719b32b9cb1128cbe14e15ebe01454c-rootfs.mount: Deactivated successfully.
Jul 7 06:15:31.534179 kubelet[2460]: E0707 06:15:31.534134 2460 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 06:15:31.692134 kubelet[2460]: E0707 06:15:31.692086 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:31.694828 containerd[1448]: time="2025-07-07T06:15:31.694387344Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:15:31.715422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607060582.mount: Deactivated successfully.
Jul 7 06:15:31.716932 containerd[1448]: time="2025-07-07T06:15:31.716874976Z" level=info msg="CreateContainer within sandbox \"461c4874f930000279dc3e228d1870e2ee42e10670078214d540ed06692df474\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93b2b703fe7d4369123fb0d25a65afeb22c9ba8296c3e58dc19e8b3ba29f4e30\""
Jul 7 06:15:31.718565 containerd[1448]: time="2025-07-07T06:15:31.717574630Z" level=info msg="StartContainer for \"93b2b703fe7d4369123fb0d25a65afeb22c9ba8296c3e58dc19e8b3ba29f4e30\""
Jul 7 06:15:31.745574 systemd[1]: Started cri-containerd-93b2b703fe7d4369123fb0d25a65afeb22c9ba8296c3e58dc19e8b3ba29f4e30.scope - libcontainer container 93b2b703fe7d4369123fb0d25a65afeb22c9ba8296c3e58dc19e8b3ba29f4e30.
Jul 7 06:15:31.769029 containerd[1448]: time="2025-07-07T06:15:31.768984823Z" level=info msg="StartContainer for \"93b2b703fe7d4369123fb0d25a65afeb22c9ba8296c3e58dc19e8b3ba29f4e30\" returns successfully"
Jul 7 06:15:32.038433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 7 06:15:32.487767 kubelet[2460]: E0707 06:15:32.487731 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:32.696539 kubelet[2460]: E0707 06:15:32.696507 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:32.711714 kubelet[2460]: I0707 06:15:32.711651 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-84xjt" podStartSLOduration=5.711635423 podStartE2EDuration="5.711635423s" podCreationTimestamp="2025-07-07 06:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:32.711347113 +0000 UTC m=+81.302311684" watchObservedRunningTime="2025-07-07 06:15:32.711635423 +0000 UTC m=+81.302599954"
Jul 7 06:15:33.587670 kubelet[2460]: I0707 06:15:33.586581 2460 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T06:15:33Z","lastTransitionTime":"2025-07-07T06:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 06:15:34.073527 kubelet[2460]: E0707 06:15:34.073490 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:34.834764 systemd-networkd[1380]: lxc_health: Link UP
Jul 7 06:15:34.846576 systemd-networkd[1380]: lxc_health: Gained carrier
Jul 7 06:15:36.077254 kubelet[2460]: E0707 06:15:36.076505 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:36.304692 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Jul 7 06:15:36.705680 kubelet[2460]: E0707 06:15:36.704497 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:37.706611 kubelet[2460]: E0707 06:15:37.706532 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:38.488446 kubelet[2460]: E0707 06:15:38.488382 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:40.641034 sshd[4284]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:40.644747 systemd[1]: sshd@25-10.0.0.145:22-10.0.0.1:57470.service: Deactivated successfully.
Jul 7 06:15:40.646468 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:15:40.648073 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:15:40.649153 systemd-logind[1418]: Removed session 26.