Jul 7 06:00:50.892931 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 06:00:50.892963 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 06:00:50.892975 kernel: KASLR enabled Jul 7 06:00:50.892981 kernel: efi: EFI v2.7 by EDK II Jul 7 06:00:50.892987 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 7 06:00:50.892993 kernel: random: crng init done Jul 7 06:00:50.893000 kernel: ACPI: Early table checksum verification disabled Jul 7 06:00:50.893011 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 7 06:00:50.893017 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 7 06:00:50.893025 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893031 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893037 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893043 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893049 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893056 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893064 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893070 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893076 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:00:50.893083 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 7 06:00:50.893089 kernel: NUMA: Failed to initialise from firmware Jul 7 06:00:50.893095 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 06:00:50.893102 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jul 7 06:00:50.893108 kernel: Zone ranges: Jul 7 06:00:50.893114 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 06:00:50.893121 kernel: DMA32 empty Jul 7 06:00:50.893128 kernel: Normal empty Jul 7 06:00:50.893134 kernel: Movable zone start for each node Jul 7 06:00:50.893140 kernel: Early memory node ranges Jul 7 06:00:50.893147 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 7 06:00:50.893153 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 7 06:00:50.893159 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 7 06:00:50.893165 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 7 06:00:50.893172 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 7 06:00:50.893178 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 7 06:00:50.893184 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 7 06:00:50.893191 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 06:00:50.893197 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 7 06:00:50.893204 kernel: psci: probing for conduit method from ACPI. Jul 7 06:00:50.893211 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 7 06:00:50.893217 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 06:00:50.893226 kernel: psci: Trusted OS migration not required Jul 7 06:00:50.893233 kernel: psci: SMC Calling Convention v1.1 Jul 7 06:00:50.893239 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 7 06:00:50.893247 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 06:00:50.893254 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 06:00:50.893261 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 7 06:00:50.893268 kernel: Detected PIPT I-cache on CPU0 Jul 7 06:00:50.893275 kernel: CPU features: detected: GIC system register CPU interface Jul 7 06:00:50.893281 kernel: CPU features: detected: Hardware dirty bit management Jul 7 06:00:50.893288 kernel: CPU features: detected: Spectre-v4 Jul 7 06:00:50.893294 kernel: CPU features: detected: Spectre-BHB Jul 7 06:00:50.893301 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 06:00:50.893308 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 06:00:50.893316 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 06:00:50.893323 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 06:00:50.893329 kernel: alternatives: applying boot alternatives Jul 7 06:00:50.893337 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:00:50.893344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:00:50.893351 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 06:00:50.893358 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:00:50.893364 kernel: Fallback order for Node 0: 0 Jul 7 06:00:50.893371 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 7 06:00:50.893378 kernel: Policy zone: DMA Jul 7 06:00:50.893385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:00:50.893393 kernel: software IO TLB: area num 4. Jul 7 06:00:50.893399 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 7 06:00:50.893406 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) Jul 7 06:00:50.893413 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 7 06:00:50.893420 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:00:50.893427 kernel: rcu: RCU event tracing is enabled. Jul 7 06:00:50.893434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 7 06:00:50.893441 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:00:50.893447 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:00:50.893454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 06:00:50.893461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 7 06:00:50.893468 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 06:00:50.893475 kernel: GICv3: 256 SPIs implemented Jul 7 06:00:50.893482 kernel: GICv3: 0 Extended SPIs implemented Jul 7 06:00:50.893489 kernel: Root IRQ handler: gic_handle_irq Jul 7 06:00:50.893495 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 06:00:50.893502 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 7 06:00:50.893509 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 7 06:00:50.893516 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 06:00:50.893523 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 7 06:00:50.893530 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 7 06:00:50.893536 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 7 06:00:50.893543 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:00:50.893551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:00:50.893558 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 06:00:50.893565 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 06:00:50.893571 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 06:00:50.893578 kernel: arm-pv: using stolen time PV Jul 7 06:00:50.893585 kernel: Console: colour dummy device 80x25 Jul 7 06:00:50.893592 kernel: ACPI: Core revision 20230628 Jul 7 06:00:50.893599 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 06:00:50.893606 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:00:50.893613 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 06:00:50.893621 kernel: landlock: Up and running. Jul 7 06:00:50.893634 kernel: SELinux: Initializing. Jul 7 06:00:50.893641 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:00:50.893648 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:00:50.893655 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 06:00:50.893663 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 06:00:50.893670 kernel: rcu: Hierarchical SRCU implementation. Jul 7 06:00:50.893677 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:00:50.893684 kernel: Platform MSI: ITS@0x8080000 domain created Jul 7 06:00:50.893692 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 7 06:00:50.893699 kernel: Remapping and enabling EFI services. Jul 7 06:00:50.893705 kernel: smp: Bringing up secondary CPUs ... 
Jul 7 06:00:50.893712 kernel: Detected PIPT I-cache on CPU1 Jul 7 06:00:50.893720 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 7 06:00:50.893726 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 7 06:00:50.893733 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:00:50.893740 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 06:00:50.893747 kernel: Detected PIPT I-cache on CPU2 Jul 7 06:00:50.893754 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 7 06:00:50.893762 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 7 06:00:50.893769 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:00:50.893780 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 7 06:00:50.893788 kernel: Detected PIPT I-cache on CPU3 Jul 7 06:00:50.893796 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 7 06:00:50.893803 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 7 06:00:50.893810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:00:50.893817 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 7 06:00:50.893825 kernel: smp: Brought up 1 node, 4 CPUs Jul 7 06:00:50.893833 kernel: SMP: Total of 4 processors activated. Jul 7 06:00:50.893840 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 06:00:50.893848 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 06:00:50.893855 kernel: CPU features: detected: Common not Private translations Jul 7 06:00:50.893862 kernel: CPU features: detected: CRC32 instructions Jul 7 06:00:50.893869 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 7 06:00:50.893877 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 06:00:50.893884 kernel: CPU features: detected: LSE atomic instructions Jul 7 06:00:50.893893 kernel: CPU features: detected: Privileged Access Never Jul 7 06:00:50.893900 kernel: CPU features: detected: RAS Extension Support Jul 7 06:00:50.893908 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 7 06:00:50.893915 kernel: CPU: All CPU(s) started at EL1 Jul 7 06:00:50.893922 kernel: alternatives: applying system-wide alternatives Jul 7 06:00:50.893930 kernel: devtmpfs: initialized Jul 7 06:00:50.893937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:00:50.893944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 7 06:00:50.893951 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:00:50.893960 kernel: SMBIOS 3.0.0 present. 
Jul 7 06:00:50.893967 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 7 06:00:50.893974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:00:50.893982 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 06:00:50.893989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 06:00:50.893997 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 06:00:50.894007 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:00:50.894016 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jul 7 06:00:50.894023 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:00:50.894032 kernel: cpuidle: using governor menu Jul 7 06:00:50.894039 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 7 06:00:50.894047 kernel: ASID allocator initialised with 32768 entries Jul 7 06:00:50.894054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:00:50.894061 kernel: Serial: AMBA PL011 UART driver Jul 7 06:00:50.894069 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 06:00:50.894076 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 06:00:50.894083 kernel: Modules: 509008 pages in range for PLT usage Jul 7 06:00:50.894091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:00:50.894114 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:00:50.894121 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 06:00:50.894129 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 06:00:50.894137 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:00:50.894144 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:00:50.894151 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 06:00:50.894159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 06:00:50.894166 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:00:50.894173 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:00:50.894183 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:00:50.894190 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 06:00:50.894198 kernel: ACPI: Interpreter enabled Jul 7 06:00:50.894205 kernel: ACPI: Using GIC for interrupt routing Jul 7 06:00:50.894213 kernel: ACPI: MCFG table detected, 1 entries Jul 7 06:00:50.894220 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 7 06:00:50.894227 kernel: printk: console [ttyAMA0] enabled Jul 7 06:00:50.894234 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 06:00:50.894355 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 06:00:50.894428 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 06:00:50.894492 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 06:00:50.894555 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 7 06:00:50.894616 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 7 06:00:50.894695 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 7 06:00:50.894704 kernel: PCI host bridge to bus 0000:00 Jul 7 06:00:50.894780 kernel: pci_bus 0000:00: root bus 
resource [mem 0x10000000-0x3efeffff window] Jul 7 06:00:50.894840 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 7 06:00:50.894895 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 7 06:00:50.894956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 06:00:50.895041 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 7 06:00:50.895116 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 7 06:00:50.895181 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 7 06:00:50.895247 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 7 06:00:50.895309 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 7 06:00:50.895371 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 7 06:00:50.895448 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 7 06:00:50.895512 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 7 06:00:50.895568 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 7 06:00:50.895632 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 7 06:00:50.895703 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 7 06:00:50.895713 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 7 06:00:50.895720 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 7 06:00:50.895728 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 7 06:00:50.895735 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 7 06:00:50.895742 kernel: iommu: Default domain type: Translated Jul 7 06:00:50.895750 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 06:00:50.895757 kernel: efivars: Registered efivars operations Jul 7 06:00:50.895766 kernel: vgaarb: loaded Jul 7 06:00:50.895774 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 06:00:50.895781 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:00:50.895788 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:00:50.895796 kernel: pnp: PnP ACPI init Jul 7 06:00:50.895870 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 7 06:00:50.895881 kernel: pnp: PnP ACPI: found 1 devices Jul 7 06:00:50.895888 kernel: NET: Registered PF_INET protocol family Jul 7 06:00:50.895895 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:00:50.895905 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 06:00:50.895913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:00:50.895920 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:00:50.895928 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 06:00:50.895935 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 06:00:50.895942 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:00:50.895950 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:00:50.895957 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:00:50.895965 kernel: PCI: CLS 0 bytes, default 64 Jul 7 06:00:50.895972 kernel: kvm [1]: HYP mode not available Jul 7 06:00:50.895980 kernel: Initialise system trusted keyrings Jul 7 06:00:50.895987 kernel: workingset: timestamp_bits=39 max_order=20 
bucket_order=0 Jul 7 06:00:50.895994 kernel: Key type asymmetric registered Jul 7 06:00:50.896001 kernel: Asymmetric key parser 'x509' registered Jul 7 06:00:50.896015 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:00:50.896022 kernel: io scheduler mq-deadline registered Jul 7 06:00:50.896029 kernel: io scheduler kyber registered Jul 7 06:00:50.896036 kernel: io scheduler bfq registered Jul 7 06:00:50.896045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 06:00:50.896053 kernel: ACPI: button: Power Button [PWRB] Jul 7 06:00:50.896060 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 7 06:00:50.896127 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 7 06:00:50.896137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:00:50.896144 kernel: thunder_xcv, ver 1.0 Jul 7 06:00:50.896151 kernel: thunder_bgx, ver 1.0 Jul 7 06:00:50.896158 kernel: nicpf, ver 1.0 Jul 7 06:00:50.896166 kernel: nicvf, ver 1.0 Jul 7 06:00:50.896237 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 06:00:50.896297 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:00:50 UTC (1751868050) Jul 7 06:00:50.896307 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:00:50.896314 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 7 06:00:50.896322 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 06:00:50.896329 kernel: watchdog: Hard watchdog permanently disabled Jul 7 06:00:50.896336 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:00:50.896343 kernel: Segment Routing with IPv6 Jul 7 06:00:50.896353 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:00:50.896360 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:00:50.896367 kernel: Key type dns_resolver registered Jul 7 06:00:50.896374 kernel: registered taskstats version 1 Jul 7 06:00:50.896381 kernel: Loading compiled-in X.509 certificates Jul 7 06:00:50.896389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 06:00:50.896396 kernel: Key type .fscrypt registered Jul 7 06:00:50.896403 kernel: Key type fscrypt-provisioning registered Jul 7 06:00:50.896410 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 06:00:50.896419 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:00:50.896426 kernel: ima: No architecture policies found Jul 7 06:00:50.896433 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 06:00:50.896440 kernel: clk: Disabling unused clocks Jul 7 06:00:50.896448 kernel: Freeing unused kernel memory: 39424K Jul 7 06:00:50.896455 kernel: Run /init as init process Jul 7 06:00:50.896462 kernel: with arguments: Jul 7 06:00:50.896469 kernel: /init Jul 7 06:00:50.896476 kernel: with environment: Jul 7 06:00:50.896484 kernel: HOME=/ Jul 7 06:00:50.896491 kernel: TERM=linux Jul 7 06:00:50.896498 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:00:50.896508 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:00:50.896517 systemd[1]: Detected virtualization kvm. Jul 7 06:00:50.896525 systemd[1]: Detected architecture arm64. 
Jul 7 06:00:50.896532 systemd[1]: Running in initrd. Jul 7 06:00:50.896541 systemd[1]: No hostname configured, using default hostname. Jul 7 06:00:50.896549 systemd[1]: Hostname set to . Jul 7 06:00:50.896557 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:00:50.896564 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:00:50.896572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:00:50.896580 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:00:50.896588 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:00:50.896596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:00:50.896605 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:00:50.896613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:00:50.896622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:00:50.896638 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:00:50.896646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:00:50.896654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:00:50.896662 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:00:50.896671 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:00:50.896679 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:00:50.896686 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:00:50.896694 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:00:50.896702 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:00:50.896710 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:00:50.896717 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 06:00:50.896725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:00:50.896733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:00:50.896742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:00:50.896750 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:00:50.896757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:00:50.896765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:00:50.896773 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:00:50.896781 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:00:50.896788 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:00:50.896796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:00:50.896805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:50.896813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:00:50.896821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:00:50.896828 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 7 06:00:50.896837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:00:50.896846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:50.896869 systemd-journald[237]: Collecting audit messages is disabled. Jul 7 06:00:50.896887 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:00:50.896896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:00:50.896905 systemd-journald[237]: Journal started Jul 7 06:00:50.896923 systemd-journald[237]: Runtime Journal (/run/log/journal/64683e592040429785452e1015d27592) is 5.9M, max 47.3M, 41.4M free. Jul 7 06:00:50.888944 systemd-modules-load[238]: Inserted module 'overlay' Jul 7 06:00:50.900646 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:00:50.902682 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:00:50.902790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:00:50.905028 kernel: Bridge firewalling registered Jul 7 06:00:50.904098 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 7 06:00:50.906178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:00:50.907877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:00:50.910297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:00:50.917883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:00:50.919049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:00:50.920695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:00:50.922328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:00:50.933867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:00:50.935643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:00:50.942839 dracut-cmdline[274]: dracut-dracut-053 Jul 7 06:00:50.945242 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:00:50.959780 systemd-resolved[276]: Positive Trust Anchors: Jul 7 06:00:50.959796 systemd-resolved[276]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:00:50.959827 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:00:50.964475 systemd-resolved[276]: Defaulting to hostname 'linux'. Jul 7 06:00:50.965389 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:00:50.966459 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:00:51.006658 kernel: SCSI subsystem initialized Jul 7 06:00:51.010644 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:00:51.018656 kernel: iscsi: registered transport (tcp) Jul 7 06:00:51.030642 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:00:51.030658 kernel: QLogic iSCSI HBA Driver Jul 7 06:00:51.070665 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:00:51.080763 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:00:51.097681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:00:51.097731 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:00:51.097759 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 06:00:51.145684 kernel: raid6: neonx8 gen() 15750 MB/s Jul 7 06:00:51.162659 kernel: raid6: neonx4 gen() 15606 MB/s Jul 7 06:00:51.179653 kernel: raid6: neonx2 gen() 13204 MB/s Jul 7 06:00:51.196652 kernel: raid6: neonx1 gen() 10461 MB/s Jul 7 06:00:51.213654 kernel: raid6: int64x8 gen() 6947 MB/s Jul 7 06:00:51.230651 kernel: raid6: int64x4 gen() 7352 MB/s Jul 7 06:00:51.247651 kernel: raid6: int64x2 gen() 6127 MB/s Jul 7 06:00:51.264648 kernel: raid6: int64x1 gen() 5058 MB/s Jul 7 06:00:51.264674 kernel: raid6: using algorithm neonx8 gen() 15750 MB/s Jul 7 06:00:51.281655 kernel: raid6: .... xor() 11942 MB/s, rmw enabled Jul 7 06:00:51.281685 kernel: raid6: using neon recovery algorithm Jul 7 06:00:51.286642 kernel: xor: measuring software checksum speed Jul 7 06:00:51.286657 kernel: 8regs : 19231 MB/sec Jul 7 06:00:51.286666 kernel: 32regs : 18641 MB/sec Jul 7 06:00:51.287919 kernel: arm64_neon : 26280 MB/sec Jul 7 06:00:51.287944 kernel: xor: using function: arm64_neon (26280 MB/sec) Jul 7 06:00:51.337657 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:00:51.347421 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:00:51.357772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:00:51.368585 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jul 7 06:00:51.371766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:00:51.377852 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:00:51.388554 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jul 7 06:00:51.412929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 7 06:00:51.422795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:00:51.461670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:00:51.470282 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:00:51.481301 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:00:51.482447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:00:51.485467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:00:51.487031 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:00:51.494770 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:00:51.498697 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 7 06:00:51.498826 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 06:00:51.504915 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:00:51.504948 kernel: GPT:9289727 != 19775487 Jul 7 06:00:51.504958 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 06:00:51.504968 kernel: GPT:9289727 != 19775487 Jul 7 06:00:51.505660 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 06:00:51.507025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:51.508608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:00:51.519148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:00:51.519383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:00:51.523668 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (511) Jul 7 06:00:51.525968 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (517) Jul 7 06:00:51.524946 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:00:51.525874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:00:51.526013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:51.526855 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:51.540957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:51.547862 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 06:00:51.552672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:51.557877 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 06:00:51.564131 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 06:00:51.565021 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 06:00:51.570373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:00:51.586846 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:00:51.589962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:00:51.600982 disk-uuid[549]: Primary Header is updated. 
Jul 7 06:00:51.600982 disk-uuid[549]: Secondary Entries is updated. Jul 7 06:00:51.600982 disk-uuid[549]: Secondary Header is updated. Jul 7 06:00:51.604697 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:51.614388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:00:52.618030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:52.618100 disk-uuid[550]: The operation has completed successfully. Jul 7 06:00:52.642598 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:00:52.643679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:00:52.664788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:00:52.667562 sh[574]: Success Jul 7 06:00:52.685285 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 06:00:52.733142 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:00:52.734853 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:00:52.735758 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:00:52.745776 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 06:00:52.745813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:00:52.745831 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 06:00:52.747062 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 06:00:52.747082 kernel: BTRFS info (device dm-0): using free space tree Jul 7 06:00:52.751275 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:00:52.752349 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:00:52.753050 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:00:52.755542 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:00:52.764885 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:00:52.764926 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:00:52.764937 kernel: BTRFS info (device vda6): using free space tree Jul 7 06:00:52.766715 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 06:00:52.773778 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 06:00:52.775164 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:00:52.782406 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:00:52.789773 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:00:52.850637 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:00:52.861808 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:00:52.883052 systemd-networkd[767]: lo: Link UP Jul 7 06:00:52.883061 systemd-networkd[767]: lo: Gained carrier Jul 7 06:00:52.883721 systemd-networkd[767]: Enumeration completed Jul 7 06:00:52.884235 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 06:00:52.885444 ignition[671]: Ignition 2.19.0 Jul 7 06:00:52.884238 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:00:52.885451 ignition[671]: Stage: fetch-offline Jul 7 06:00:52.884902 systemd-networkd[767]: eth0: Link UP Jul 7 06:00:52.885481 ignition[671]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:52.884905 systemd-networkd[767]: eth0: Gained carrier Jul 7 06:00:52.885489 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:52.884912 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:00:52.885647 ignition[671]: parsed url from cmdline: "" Jul 7 06:00:52.886292 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:00:52.885651 ignition[671]: no config URL provided Jul 7 06:00:52.888513 systemd[1]: Reached target network.target - Network. Jul 7 06:00:52.885655 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:00:52.885663 ignition[671]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:00:52.885686 ignition[671]: op(1): [started] loading QEMU firmware config module Jul 7 06:00:52.885691 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 06:00:52.896768 ignition[671]: op(1): [finished] loading QEMU firmware config module Jul 7 06:00:52.896788 ignition[671]: QEMU firmware config was not found. Ignoring... Jul 7 06:00:52.902684 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:00:52.920064 ignition[671]: parsing config with SHA512: ad9322c05112f2be9777fb69fcfdccc05d33926f6566bc7d5c81d1d5e50a838461d0055100a617a115132c7441d89d2503dc12a7119520d13ed50888a1052244 Jul 7 06:00:52.924350 unknown[671]: fetched base config from "system" Jul 7 06:00:52.924371 unknown[671]: fetched user config from "qemu" Jul 7 06:00:52.925722 ignition[671]: fetch-offline: fetch-offline passed Jul 7 06:00:52.925838 ignition[671]: Ignition finished successfully Jul 7 06:00:52.926973 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:00:52.928447 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 06:00:52.932775 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:00:52.942776 ignition[773]: Ignition 2.19.0 Jul 7 06:00:52.942785 ignition[773]: Stage: kargs Jul 7 06:00:52.942939 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:52.942948 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:52.943744 ignition[773]: kargs: kargs passed Jul 7 06:00:52.943785 ignition[773]: Ignition finished successfully Jul 7 06:00:52.945541 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:00:52.947394 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:00:52.959323 ignition[781]: Ignition 2.19.0 Jul 7 06:00:52.959332 ignition[781]: Stage: disks Jul 7 06:00:52.959474 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:52.959490 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:52.960357 ignition[781]: disks: disks passed Jul 7 06:00:52.960403 ignition[781]: Ignition finished successfully Jul 7 06:00:52.962712 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 7 06:00:52.964262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:00:52.965062 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:00:52.966712 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:00:52.968333 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:00:52.969756 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:00:52.985835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:00:52.994545 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 7 06:00:52.998233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:00:52.999934 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:00:53.043831 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 06:00:53.044234 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:00:53.045188 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:00:53.056696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:00:53.058688 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:00:53.059552 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 06:00:53.059589 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:00:53.065574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799) Jul 7 06:00:53.059610 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:00:53.066123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:00:53.070124 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:00:53.070144 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:00:53.070155 kernel: BTRFS info (device vda6): using free space tree Jul 7 06:00:53.070053 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:00:53.072496 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 06:00:53.074065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:00:53.111442 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:00:53.115747 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:00:53.118669 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:00:53.121801 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:00:53.189105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:00:53.198717 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:00:53.199956 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 7 06:00:53.204641 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:00:53.219140 ignition[913]: INFO : Ignition 2.19.0 Jul 7 06:00:53.219884 ignition[913]: INFO : Stage: mount Jul 7 06:00:53.220360 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:53.220360 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:53.222568 ignition[913]: INFO : mount: mount passed Jul 7 06:00:53.222568 ignition[913]: INFO : Ignition finished successfully Jul 7 06:00:53.221830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:00:53.223413 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:00:53.229707 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:00:53.744976 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:00:53.752865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:00:53.757644 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (928) Jul 7 06:00:53.759123 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:00:53.759139 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:00:53.759150 kernel: BTRFS info (device vda6): using free space tree Jul 7 06:00:53.761640 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 06:00:53.762437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:00:53.777485 ignition[945]: INFO : Ignition 2.19.0 Jul 7 06:00:53.777485 ignition[945]: INFO : Stage: files Jul 7 06:00:53.778640 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:53.778640 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:53.778640 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:00:53.781186 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:00:53.781186 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:00:53.781186 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:00:53.781186 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:00:53.785023 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:00:53.785023 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:00:53.785023 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 7 06:00:53.781482 unknown[945]: wrote ssh authorized keys file for user: core Jul 7 06:00:53.821670 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:00:54.085048 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:00:54.085048 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:00:54.087945 ignition[945]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:00:54.087945 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 7 06:00:54.714765 systemd-networkd[767]: eth0: Gained IPv6LL Jul 7 06:00:54.717249 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:00:55.321819 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:00:55.321819 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 7 06:00:55.324654 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for 
"coreos-metadata.service" Jul 7 06:00:55.345399 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 06:00:55.348568 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 06:00:55.350782 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 06:00:55.350782 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:00:55.350782 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:00:55.350782 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:00:55.350782 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:00:55.350782 ignition[945]: INFO : files: files passed Jul 7 06:00:55.350782 ignition[945]: INFO : Ignition finished successfully Jul 7 06:00:55.351090 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:00:55.363782 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:00:55.365400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:00:55.366721 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:00:55.366797 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:00:55.372723 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 06:00:55.374853 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:00:55.374853 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:00:55.377175 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:00:55.379180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:00:55.380545 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:00:55.388779 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:00:55.405801 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:00:55.406535 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:00:55.407607 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:00:55.408962 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:00:55.410300 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:00:55.415740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:00:55.427093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:00:55.428998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:00:55.438966 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:00:55.439838 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 06:00:55.441453 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:00:55.442943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:00:55.443057 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:00:55.445226 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:00:55.446896 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:00:55.448280 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:00:55.449696 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:00:55.451388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:00:55.452986 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:00:55.454492 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:00:55.456080 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:00:55.457694 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:00:55.459307 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:00:55.460555 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:00:55.460670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:00:55.462672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:00:55.464293 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:00:55.465849 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:00:55.466719 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:00:55.467568 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:00:55.467683 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:00:55.470293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:00:55.470395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:00:55.472074 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:00:55.473355 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:00:55.474708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:00:55.476055 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:00:55.477434 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:00:55.478879 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:00:55.478960 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:00:55.480705 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:00:55.480778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:00:55.482104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:00:55.482200 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:00:55.483691 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:00:55.483783 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:00:55.496865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:00:55.497533 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 7 06:00:55.497662 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:00:55.502761 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:00:55.503358 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:00:55.503475 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:00:55.504925 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:00:55.510084 ignition[1000]: INFO : Ignition 2.19.0 Jul 7 06:00:55.510084 ignition[1000]: INFO : Stage: umount Jul 7 06:00:55.505022 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:00:55.510681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:00:55.510756 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:00:55.515208 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:55.515208 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:55.515208 ignition[1000]: INFO : umount: umount passed Jul 7 06:00:55.515208 ignition[1000]: INFO : Ignition finished successfully Jul 7 06:00:55.518188 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:00:55.518624 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:00:55.518738 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:00:55.520095 systemd[1]: Stopped target network.target - Network. Jul 7 06:00:55.520801 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:00:55.520867 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:00:55.522040 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:00:55.522079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:00:55.523315 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:00:55.523352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:00:55.524425 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:00:55.524465 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:00:55.525915 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:00:55.527056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:00:55.535677 systemd-networkd[767]: eth0: DHCPv6 lease lost Jul 7 06:00:55.537229 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:00:55.537339 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:00:55.539055 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:00:55.539086 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:00:55.546759 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:00:55.547393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:00:55.547445 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:00:55.548936 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:00:55.550431 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:00:55.550521 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:00:55.554219 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 7 06:00:55.554295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:00:55.556553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:00:55.556595 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:00:55.558111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:00:55.558151 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:00:55.559949 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:00:55.560078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:00:55.562818 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:00:55.562891 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:00:55.564883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:00:55.564928 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:00:55.565959 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:00:55.565994 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:00:55.567249 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:00:55.567290 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:00:55.569594 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:00:55.569648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:00:55.571534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:00:55.571580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:00:55.581830 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:00:55.582619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:00:55.582697 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:00:55.584319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:00:55.584361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:55.585949 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:00:55.586045 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:00:55.587416 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:00:55.587486 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:00:55.589168 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:00:55.589984 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:00:55.590054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:00:55.592029 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:00:55.600216 systemd[1]: Switching root. Jul 7 06:00:55.627367 systemd-journald[237]: Journal stopped Jul 7 06:00:56.269413 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jul 7 06:00:56.269473 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:00:56.269490 kernel: SELinux: policy capability open_perms=1 Jul 7 06:00:56.269499 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:00:56.269509 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:00:56.269521 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:00:56.269532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:00:56.269541 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:00:56.269550 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:00:56.269561 kernel: audit: type=1403 audit(1751868055.770:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:00:56.269571 systemd[1]: Successfully loaded SELinux policy in 28.930ms. Jul 7 06:00:56.269584 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.998ms. Jul 7 06:00:56.269596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:00:56.269607 systemd[1]: Detected virtualization kvm. Jul 7 06:00:56.269619 systemd[1]: Detected architecture arm64. Jul 7 06:00:56.269662 systemd[1]: Detected first boot. Jul 7 06:00:56.269676 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:00:56.269687 zram_generator::config[1043]: No configuration found. Jul 7 06:00:56.269699 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:00:56.269710 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:00:56.269720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:00:56.269730 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:00:56.269743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:00:56.269754 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:00:56.269765 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:00:56.269775 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:00:56.269785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:00:56.269796 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:00:56.269806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:00:56.269816 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:00:56.269827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:00:56.269840 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:00:56.269850 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:00:56.269861 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:00:56.269871 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:00:56.269882 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 7 06:00:56.269892 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 06:00:56.269903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:00:56.269913 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:00:56.269924 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:00:56.269935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:00:56.269946 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:00:56.269957 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:00:56.269967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:00:56.269978 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:00:56.269995 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:00:56.270007 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:00:56.270020 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:00:56.270030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:00:56.270041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:00:56.270052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:00:56.270062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:00:56.270082 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:00:56.270093 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:00:56.270103 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:00:56.270113 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:00:56.270125 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:00:56.270137 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:00:56.270147 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:00:56.270158 systemd[1]: Reached target machines.target - Containers. Jul 7 06:00:56.270168 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:00:56.270179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:00:56.270189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:00:56.270199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:00:56.270209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:00:56.270222 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:00:56.270232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:00:56.270243 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:00:56.270253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:00:56.270263 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 7 06:00:56.270274 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:00:56.270285 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:00:56.270295 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:00:56.270307 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:00:56.270317 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:00:56.270328 kernel: fuse: init (API version 7.39) Jul 7 06:00:56.270338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:00:56.270349 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:00:56.270359 kernel: ACPI: bus type drm_connector registered Jul 7 06:00:56.270369 kernel: loop: module loaded Jul 7 06:00:56.270379 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:00:56.270409 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:00:56.270421 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:00:56.270431 systemd[1]: Stopped verity-setup.service. Jul 7 06:00:56.270442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:00:56.270453 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:00:56.270481 systemd-journald[1114]: Collecting audit messages is disabled. Jul 7 06:00:56.270502 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:00:56.270514 systemd-journald[1114]: Journal started Jul 7 06:00:56.270536 systemd-journald[1114]: Runtime Journal (/run/log/journal/64683e592040429785452e1015d27592) is 5.9M, max 47.3M, 41.4M free. Jul 7 06:00:56.099949 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:00:56.270764 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:00:56.116380 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 06:00:56.116724 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:00:56.272461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:00:56.273378 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:00:56.274313 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:00:56.275267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:00:56.276330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:00:56.277479 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:00:56.277615 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:00:56.278840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:00:56.278971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:00:56.280036 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:00:56.280168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:00:56.281269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:00:56.281412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:00:56.282502 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:00:56.282659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 7 06:00:56.283618 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:00:56.283761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:00:56.284770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:00:56.285773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:00:56.286867 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:00:56.298244 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:00:56.305738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:00:56.307467 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:00:56.308329 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:00:56.308358 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:00:56.310071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 06:00:56.311929 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:00:56.313662 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:00:56.314484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:00:56.315818 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:00:56.318819 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:00:56.319648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:00:56.322490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:00:56.323681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:00:56.327814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:00:56.329691 systemd-journald[1114]: Time spent on flushing to /var/log/journal/64683e592040429785452e1015d27592 is 22.595ms for 852 entries. Jul 7 06:00:56.329691 systemd-journald[1114]: System Journal (/var/log/journal/64683e592040429785452e1015d27592) is 8.0M, max 195.6M, 187.6M free. Jul 7 06:00:56.356908 systemd-journald[1114]: Received client request to flush runtime journal. Jul 7 06:00:56.356942 kernel: loop0: detected capacity change from 0 to 114328 Jul 7 06:00:56.329769 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:00:56.333838 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:00:56.335965 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:00:56.337124 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:00:56.338142 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:00:56.343545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:00:56.344881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:00:56.349357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 06:00:56.350478 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:00:56.359822 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 06:00:56.362304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 06:00:56.365637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:00:56.376829 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 06:00:56.378717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:00:56.383422 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:00:56.384088 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 06:00:56.386666 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:00:56.394926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:00:56.398679 kernel: loop1: detected capacity change from 0 to 207008 Jul 7 06:00:56.417292 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 7 06:00:56.417308 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 7 06:00:56.423222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:00:56.427640 kernel: loop2: detected capacity change from 0 to 114432 Jul 7 06:00:56.451884 kernel: loop3: detected capacity change from 0 to 114328 Jul 7 06:00:56.456641 kernel: loop4: detected capacity change from 0 to 207008 Jul 7 06:00:56.461643 kernel: loop5: detected capacity change from 0 to 114432 Jul 7 06:00:56.464180 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 06:00:56.464530 (sd-merge)[1180]: Merged extensions into '/usr'. Jul 7 06:00:56.467920 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:00:56.467940 systemd[1]: Reloading... Jul 7 06:00:56.517733 zram_generator::config[1202]: No configuration found. Jul 7 06:00:56.581962 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:00:56.623884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:00:56.659017 systemd[1]: Reloading finished in 190 ms. Jul 7 06:00:56.689748 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:00:56.691279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:00:56.703808 systemd[1]: Starting ensure-sysext.service... Jul 7 06:00:56.705537 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:00:56.718772 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:00:56.718787 systemd[1]: Reloading... Jul 7 06:00:56.724845 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:00:56.725404 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 7 06:00:56.726179 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:00:56.726501 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 7 06:00:56.726614 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 7 06:00:56.728913 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:00:56.729025 systemd-tmpfiles[1242]: Skipping /boot Jul 7 06:00:56.736240 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:00:56.736327 systemd-tmpfiles[1242]: Skipping /boot Jul 7 06:00:56.754702 zram_generator::config[1266]: No configuration found. Jul 7 06:00:56.843683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:00:56.878936 systemd[1]: Reloading finished in 159 ms. Jul 7 06:00:56.895683 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:00:56.907035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:00:56.914034 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:00:56.918758 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:00:56.920644 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:00:56.928851 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:00:56.940803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:00:56.942578 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:00:56.944211 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:00:56.952313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:00:56.963548 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:00:56.964955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:00:56.973237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:00:56.974655 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:00:56.980234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:00:56.984861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:00:56.986547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:00:56.990253 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jul 7 06:00:56.990876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:00:56.992917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:00:56.993045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:00:56.993761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 7 06:00:56.993910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:00:56.999404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:00:57.001933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:00:57.003888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:00:57.004800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:00:57.004923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:00:57.005547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:00:57.008776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:00:57.010245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:00:57.010374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:00:57.013168 systemd[1]: Finished ensure-sysext.service. Jul 7 06:00:57.014272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:00:57.014407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:00:57.018542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:00:57.018700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:00:57.020980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:00:57.022310 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:00:57.023492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:00:57.031283 augenrules[1341]: No rules Jul 7 06:00:57.033589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:00:57.036726 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:00:57.037837 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:00:57.037973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:00:57.095285 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:00:57.096472 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:00:57.098994 systemd-resolved[1310]: Positive Trust Anchors: Jul 7 06:00:57.100882 systemd-resolved[1310]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:00:57.100921 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:00:57.108844 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 06:00:57.110129 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jul 7 06:00:57.114038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:00:57.114912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:00:57.123873 systemd-networkd[1365]: lo: Link UP Jul 7 06:00:57.124138 systemd-networkd[1365]: lo: Gained carrier Jul 7 06:00:57.125169 systemd-networkd[1365]: Enumeration completed Jul 7 06:00:57.126733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:00:57.129273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1360) Jul 7 06:00:57.127735 systemd[1]: Reached target network.target - Network. Jul 7 06:00:57.130158 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:00:57.130268 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:00:57.131210 systemd-networkd[1365]: eth0: Link UP Jul 7 06:00:57.131292 systemd-networkd[1365]: eth0: Gained carrier Jul 7 06:00:57.131349 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:00:57.132789 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:00:57.149731 systemd-networkd[1365]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:00:57.150608 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jul 7 06:00:57.150645 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:00:56.670944 systemd-timesyncd[1347]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:00:56.677739 systemd-journald[1114]: Time jumped backwards, rotating. Jul 7 06:00:56.671404 systemd-resolved[1310]: Clock change detected. Flushing caches. Jul 7 06:00:56.673007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:00:56.674110 systemd-timesyncd[1347]: Initial clock synchronization to Mon 2025-07-07 06:00:56.670850 UTC. Jul 7 06:00:56.682598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:00:56.695166 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:00:56.714458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 06:00:56.726544 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 06:00:56.729471 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 06:00:56.743006 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:00:56.754820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:56.780623 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 06:00:56.782023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:00:56.784261 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:00:56.785329 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:00:56.786545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:00:56.787984 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:00:56.789117 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:00:56.790284 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:00:56.791442 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:00:56.791477 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:00:56.792328 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:00:56.795200 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:00:56.797287 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:00:56.809105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:00:56.811005 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 06:00:56.812273 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:00:56.813110 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:00:56.813791 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:00:56.814500 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:00:56.814530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:00:56.815389 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:00:56.817050 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:00:56.819263 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:00:56.820079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:00:56.822405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:00:56.823433 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:00:56.826338 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:00:56.827791 jq[1405]: false Jul 7 06:00:56.829124 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 7 06:00:56.832398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:00:56.836308 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:00:56.838546 extend-filesystems[1406]: Found loop3 Jul 7 06:00:56.839118 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:00:56.839727 extend-filesystems[1406]: Found loop4 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found loop5 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda1 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda2 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda3 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found usr Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda4 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda6 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda7 Jul 7 06:00:56.841162 extend-filesystems[1406]: Found vda9 Jul 7 06:00:56.841162 extend-filesystems[1406]: Checking size of /dev/vda9 Jul 7 06:00:56.842702 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:00:56.852278 dbus-daemon[1404]: [system] SELinux support is enabled Jul 7 06:00:56.843118 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:00:56.843706 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:00:56.847427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:00:56.850845 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 06:00:56.854554 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:00:56.859782 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:00:56.859994 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:00:56.862489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:00:56.862647 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:00:56.872257 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:00:56.872442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:00:56.876841 (ntainerd)[1427]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:00:56.878022 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:00:56.878074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:00:56.880366 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:00:56.880400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 7 06:00:56.884306 jq[1420]: true Jul 7 06:00:56.884925 extend-filesystems[1406]: Resized partition /dev/vda9 Jul 7 06:00:56.893651 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Jul 7 06:00:56.897789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1355) Jul 7 06:00:56.897946 jq[1437]: true Jul 7 06:00:56.902215 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:00:56.902261 tar[1425]: linux-arm64/LICENSE Jul 7 06:00:56.902261 tar[1425]: linux-arm64/helm Jul 7 06:00:56.915938 update_engine[1416]: I20250707 06:00:56.902202 1416 main.cc:92] Flatcar Update Engine starting Jul 7 06:00:56.915938 update_engine[1416]: I20250707 06:00:56.904273 1416 update_check_scheduler.cc:74] Next update check in 2m19s Jul 7 06:00:56.914538 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:00:56.918432 systemd-logind[1414]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 06:00:56.918707 systemd-logind[1414]: New seat seat0. Jul 7 06:00:56.920268 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:00:56.928175 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:00:56.942422 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:00:56.945440 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:00:56.945440 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:00:56.945440 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 06:00:56.949859 extend-filesystems[1406]: Resized filesystem in /dev/vda9 Jul 7 06:00:56.948678 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:00:56.951191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:00:57.001504 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:00:57.002581 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:00:57.008320 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:00:57.012064 locksmithd[1446]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:00:57.099508 containerd[1427]: time="2025-07-07T06:00:57.099362780Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:00:57.126525 containerd[1427]: time="2025-07-07T06:00:57.126439420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128034820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128072700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128087580Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128246780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128266180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128316660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128329420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128474340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128489740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128516940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129087 containerd[1427]: time="2025-07-07T06:00:57.128529100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.128608780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.128787460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.128876980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.128890180Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.128962780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:00:57.129309 containerd[1427]: time="2025-07-07T06:00:57.129008660Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:00:57.132659 containerd[1427]: time="2025-07-07T06:00:57.132633100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:00:57.132840 containerd[1427]: time="2025-07-07T06:00:57.132821140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:00:57.132988 containerd[1427]: time="2025-07-07T06:00:57.132968980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:00:57.133107 containerd[1427]: time="2025-07-07T06:00:57.133089980Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 7 06:00:57.133241 containerd[1427]: time="2025-07-07T06:00:57.133175420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 06:00:57.133492 containerd[1427]: time="2025-07-07T06:00:57.133444300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.133939020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134064900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134091700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134110460Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134124020Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134159460Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134178420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134200100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134214460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134225740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134237140Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134247820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134267260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134690 containerd[1427]: time="2025-07-07T06:00:57.134280180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134292340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134303500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134318340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134332620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134344300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134357140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134371300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134386980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134397980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134409860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134421460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134441940Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134461780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134473980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.134941 containerd[1427]: time="2025-07-07T06:00:57.134484580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:00:57.136123 containerd[1427]: time="2025-07-07T06:00:57.136091980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136383460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136455060Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136472380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136481820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136505220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136516780Z" level=info msg="NRI interface is disabled by configuration." 
Jul 7 06:00:57.136674 containerd[1427]: time="2025-07-07T06:00:57.136526660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 06:00:57.137332 containerd[1427]: time="2025-07-07T06:00:57.137214940Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:00:57.137642 containerd[1427]: time="2025-07-07T06:00:57.137492340Z" level=info msg="Connect containerd service" Jul 7 06:00:57.138037 containerd[1427]: time="2025-07-07T06:00:57.137869180Z" level=info msg="using legacy CRI server" Jul 7 06:00:57.138037 containerd[1427]: time="2025-07-07T06:00:57.137891860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:00:57.140163 containerd[1427]: time="2025-07-07T06:00:57.140113740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:00:57.140871 containerd[1427]: time="2025-07-07T06:00:57.140836300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:00:57.141230 containerd[1427]: time="2025-07-07T06:00:57.141177860Z" level=info msg="Start subscribing containerd event" Jul 7 06:00:57.141888 containerd[1427]: time="2025-07-07T06:00:57.141555380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:00:57.141888 containerd[1427]: time="2025-07-07T06:00:57.141639540Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:00:57.141888 containerd[1427]: time="2025-07-07T06:00:57.141579780Z" level=info msg="Start recovering state" Jul 7 06:00:57.141888 containerd[1427]: time="2025-07-07T06:00:57.141703540Z" level=info msg="Start event monitor" Jul 7 06:00:57.141888 containerd[1427]: time="2025-07-07T06:00:57.141713420Z" level=info msg="Start snapshots syncer" Jul 7 06:00:57.142016 containerd[1427]: time="2025-07-07T06:00:57.141961140Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:00:57.142016 containerd[1427]: time="2025-07-07T06:00:57.141972540Z" level=info msg="Start streaming server" Jul 7 06:00:57.142184 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:00:57.145237 containerd[1427]: time="2025-07-07T06:00:57.145210300Z" level=info msg="containerd successfully booted in 0.047185s" Jul 7 06:00:57.302624 tar[1425]: linux-arm64/README.md Jul 7 06:00:57.313548 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:00:57.878423 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:00:57.897171 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:00:57.910561 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:00:57.915950 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:00:57.916135 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:00:57.918715 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:00:57.932187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:00:57.944425 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:00:57.946298 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:00:57.947231 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:00:58.138258 systemd-networkd[1365]: eth0: Gained IPv6LL Jul 7 06:00:58.144783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:00:58.146368 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:00:58.155372 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:00:58.157518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:00:58.159293 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:00:58.173510 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:00:58.173738 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:00:58.174964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:00:58.180055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:00:58.705661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:00:58.707082 systemd[1]: Reached target multi-user.target - Multi-User System. 
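Note: the "failed to load cni during init" error above is expected on first boot. The CRI config dump sets NetworkPluginConfDir to /etc/cni/net.d, and nothing has written a network config there yet (a pod-network add-on normally does that later). A small sketch, using the directory from the log, that reproduces the same check:

```go
// check_cni_conf.go — sketch of the check behind "no network config found in
// /etc/cni/net.d": look for *.conf / *.conflist / *.json files in the
// directory named by NetworkPluginConfDir in the config dump above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir, ":", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI network config found in", confDir, "- the CRI plugin reports the network as not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}
```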
Jul 7 06:00:58.709248 systemd[1]: Startup finished in 535ms (kernel) + 5.064s (initrd) + 3.451s (userspace) = 9.051s. Jul 7 06:00:58.710043 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:00:59.102656 kubelet[1517]: E0707 06:00:59.102540 1517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:00:59.104908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:00:59.105066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:01:03.449915 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:01:03.451011 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166). Jul 7 06:01:03.510879 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:03.512584 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:03.524278 systemd-logind[1414]: New session 1 of user core. Jul 7 06:01:03.525262 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:01:03.538389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:01:03.549045 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:01:03.551856 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:01:03.558641 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:01:03.630323 systemd[1534]: Queued start job for default target default.target. Jul 7 06:01:03.640030 systemd[1534]: Created slice app.slice - User Application Slice. Jul 7 06:01:03.640059 systemd[1534]: Reached target paths.target - Paths. Jul 7 06:01:03.640071 systemd[1534]: Reached target timers.target - Timers. Jul 7 06:01:03.641367 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:01:03.650814 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:01:03.650876 systemd[1534]: Reached target sockets.target - Sockets. Jul 7 06:01:03.650888 systemd[1534]: Reached target basic.target - Basic System. Jul 7 06:01:03.650923 systemd[1534]: Reached target default.target - Main User Target. Jul 7 06:01:03.650948 systemd[1534]: Startup finished in 87ms. Jul 7 06:01:03.651269 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:01:03.664277 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:01:03.729993 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:59168.service - OpenSSH per-connection server daemon (10.0.0.1:59168). Jul 7 06:01:03.773758 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 59168 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:03.774916 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:03.778856 systemd-logind[1414]: New session 2 of user core. Jul 7 06:01:03.795292 systemd[1]: Started session-2.scope - Session 2 of User core. 
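Note: the kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by `kubeadm init` / `kubeadm join` (an assumption about how this host is being set up), so the early failures are expected. A minimal sketch of the same precondition check, with the path taken from the error message:

```go
// kubelet_config_present.go — sketch: report whether the kubelet config file
// named in the error above exists yet.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	info, err := os.Stat(path)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println(path, "missing - kubelet will keep exiting until it is created")
	case err != nil:
		fmt.Println("stat failed:", err)
	default:
		fmt.Printf("%s present (%d bytes)\n", path, info.Size())
	}
}
```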
Jul 7 06:01:03.848134 sshd[1545]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:03.857797 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:59168.service: Deactivated successfully. Jul 7 06:01:03.860328 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:01:03.861683 systemd-logind[1414]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:01:03.871712 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174). Jul 7 06:01:03.872717 systemd-logind[1414]: Removed session 2. Jul 7 06:01:03.904220 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:03.905514 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:03.909196 systemd-logind[1414]: New session 3 of user core. Jul 7 06:01:03.918312 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:01:03.967427 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:03.977455 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:59174.service: Deactivated successfully. Jul 7 06:01:03.978757 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:01:03.979939 systemd-logind[1414]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:01:03.981460 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:59184.service - OpenSSH per-connection server daemon (10.0.0.1:59184). Jul 7 06:01:03.982163 systemd-logind[1414]: Removed session 3. Jul 7 06:01:04.017343 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 59184 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:04.018587 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:04.022651 systemd-logind[1414]: New session 4 of user core. Jul 7 06:01:04.032314 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:01:04.084220 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:04.095629 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:59184.service: Deactivated successfully. Jul 7 06:01:04.097054 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:01:04.099229 systemd-logind[1414]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:01:04.100329 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:59188.service - OpenSSH per-connection server daemon (10.0.0.1:59188). Jul 7 06:01:04.101485 systemd-logind[1414]: Removed session 4. Jul 7 06:01:04.136541 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:04.137719 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:04.141267 systemd-logind[1414]: New session 5 of user core. Jul 7 06:01:04.152301 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:01:04.209762 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:01:04.210028 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:01:04.551367 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 06:01:04.551503 (dockerd)[1587]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:01:04.803026 dockerd[1587]: time="2025-07-07T06:01:04.802913260Z" level=info msg="Starting up" Jul 7 06:01:04.949352 dockerd[1587]: time="2025-07-07T06:01:04.949308460Z" level=info msg="Loading containers: start." Jul 7 06:01:05.035674 kernel: Initializing XFRM netlink socket Jul 7 06:01:05.092829 systemd-networkd[1365]: docker0: Link UP Jul 7 06:01:05.113179 dockerd[1587]: time="2025-07-07T06:01:05.113134500Z" level=info msg="Loading containers: done." Jul 7 06:01:05.125182 dockerd[1587]: time="2025-07-07T06:01:05.125112380Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:01:05.125295 dockerd[1587]: time="2025-07-07T06:01:05.125220260Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:01:05.125344 dockerd[1587]: time="2025-07-07T06:01:05.125323340Z" level=info msg="Daemon has completed initialization" Jul 7 06:01:05.153534 dockerd[1587]: time="2025-07-07T06:01:05.153413820Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:01:05.153621 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:01:05.811957 containerd[1427]: time="2025-07-07T06:01:05.811908700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:01:06.375896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385306877.mount: Deactivated successfully. Jul 7 06:01:07.346384 containerd[1427]: time="2025-07-07T06:01:07.346319380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:07.347276 containerd[1427]: time="2025-07-07T06:01:07.347238820Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 7 06:01:07.348204 containerd[1427]: time="2025-07-07T06:01:07.348172220Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:07.351914 containerd[1427]: time="2025-07-07T06:01:07.351862660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:07.353624 containerd[1427]: time="2025-07-07T06:01:07.353537820Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.54158384s" Jul 7 06:01:07.353624 containerd[1427]: time="2025-07-07T06:01:07.353574460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 06:01:07.354353 containerd[1427]: time="2025-07-07T06:01:07.354331540Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:01:08.511769 containerd[1427]: time="2025-07-07T06:01:08.511723580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:08.512661 containerd[1427]: time="2025-07-07T06:01:08.512451260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 7 06:01:08.514220 containerd[1427]: time="2025-07-07T06:01:08.513413260Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:08.516517 containerd[1427]: time="2025-07-07T06:01:08.516451460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:08.517837 containerd[1427]: time="2025-07-07T06:01:08.517713020Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.16335032s" Jul 7 06:01:08.517837 containerd[1427]: time="2025-07-07T06:01:08.517747180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 06:01:08.518257 containerd[1427]: time="2025-07-07T06:01:08.518175340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:01:09.355535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:01:09.368598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:09.507548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:09.510938 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:01:09.547528 kubelet[1805]: E0707 06:01:09.547481 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:01:09.550749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:01:09.550886 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 06:01:09.779507 containerd[1427]: time="2025-07-07T06:01:09.779392860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:09.780739 containerd[1427]: time="2025-07-07T06:01:09.780695220Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 7 06:01:09.781468 containerd[1427]: time="2025-07-07T06:01:09.781400380Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:09.784401 containerd[1427]: time="2025-07-07T06:01:09.784356660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:09.785435 containerd[1427]: time="2025-07-07T06:01:09.785397300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.26718808s" Jul 7 06:01:09.785493 containerd[1427]: time="2025-07-07T06:01:09.785435260Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 06:01:09.786025 containerd[1427]: time="2025-07-07T06:01:09.785849940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:01:10.750420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604089954.mount: Deactivated successfully. 
Jul 7 06:01:10.963120 containerd[1427]: time="2025-07-07T06:01:10.962961940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:10.963899 containerd[1427]: time="2025-07-07T06:01:10.963698140Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 7 06:01:10.964489 containerd[1427]: time="2025-07-07T06:01:10.964454540Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:10.966428 containerd[1427]: time="2025-07-07T06:01:10.966383300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:10.967199 containerd[1427]: time="2025-07-07T06:01:10.967097020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.181215s" Jul 7 06:01:10.967199 containerd[1427]: time="2025-07-07T06:01:10.967128620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 06:01:10.967611 containerd[1427]: time="2025-07-07T06:01:10.967583980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:01:11.546340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101144388.mount: Deactivated successfully. 
Jul 7 06:01:12.313697 containerd[1427]: time="2025-07-07T06:01:12.313650180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.314579 containerd[1427]: time="2025-07-07T06:01:12.314548500Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 06:01:12.315266 containerd[1427]: time="2025-07-07T06:01:12.315201540Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.318455 containerd[1427]: time="2025-07-07T06:01:12.318394660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.319818 containerd[1427]: time="2025-07-07T06:01:12.319577740Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3519444s" Jul 7 06:01:12.319818 containerd[1427]: time="2025-07-07T06:01:12.319613420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:01:12.320065 containerd[1427]: time="2025-07-07T06:01:12.320025500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:01:12.736679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697786430.mount: Deactivated successfully. 
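Note: these pulls go through containerd's CRI plugin, which keeps Kubernetes images in the "k8s.io" namespace. For reference, a pull like the pause:3.10 one above can be reproduced with the containerd Go client. This is a hedged sketch: it assumes the github.com/containerd/containerd module (v1.7.x, matching the "v1.7.21" runtime version reported later in this log) is available in go.mod, and it uses the socket path from the earlier config dump.

```go
// pull_pause.go — hedged sketch: pull registry.k8s.io/pause:3.10 into the
// "k8s.io" namespace via the containerd Go client (assumes the
// github.com/containerd/containerd module, API as of v1.7.x).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```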
Jul 7 06:01:12.740397 containerd[1427]: time="2025-07-07T06:01:12.740354860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.741482 containerd[1427]: time="2025-07-07T06:01:12.741450540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:01:12.742343 containerd[1427]: time="2025-07-07T06:01:12.742315820Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.744813 containerd[1427]: time="2025-07-07T06:01:12.744779820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:12.745552 containerd[1427]: time="2025-07-07T06:01:12.745513180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.45584ms" Jul 7 06:01:12.745589 containerd[1427]: time="2025-07-07T06:01:12.745554140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:01:12.745988 containerd[1427]: time="2025-07-07T06:01:12.745963860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:01:13.276793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576827150.mount: Deactivated successfully. Jul 7 06:01:14.958770 containerd[1427]: time="2025-07-07T06:01:14.958723660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:14.959686 containerd[1427]: time="2025-07-07T06:01:14.959368420Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 7 06:01:14.960443 containerd[1427]: time="2025-07-07T06:01:14.960380100Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:14.963925 containerd[1427]: time="2025-07-07T06:01:14.963876620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:14.965345 containerd[1427]: time="2025-07-07T06:01:14.965203260Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.21920836s" Jul 7 06:01:14.965345 containerd[1427]: time="2025-07-07T06:01:14.965241700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 06:01:19.591767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 7 06:01:19.607300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:19.745056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:19.747870 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:01:19.781510 kubelet[1964]: E0707 06:01:19.781464 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:01:19.783927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:01:19.784047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:01:20.557813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:20.574343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:20.598504 systemd[1]: Reloading requested from client PID 1979 ('systemctl') (unit session-5.scope)... Jul 7 06:01:20.598519 systemd[1]: Reloading... Jul 7 06:01:20.664172 zram_generator::config[2023]: No configuration found. Jul 7 06:01:20.795034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:01:20.847857 systemd[1]: Reloading finished in 249 ms. Jul 7 06:01:20.883352 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:20.885724 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:01:20.887178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:20.888533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:20.987056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:20.991058 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:01:21.024799 kubelet[2065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:01:21.024799 kubelet[2065]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:01:21.024799 kubelet[2065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
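Note: the deprecation warnings above say that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir should move into the file passed via --config, i.e. the same /var/lib/kubelet/config.yaml whose absence caused the earlier restarts. Purely as an illustration (the real file on this host is generated by the provisioning tooling, not hand-written), a minimal KubeletConfiguration could be laid down like this; apart from the cgroup driver, which this log later reports as "systemd", the values are assumptions:

```go
// write_kubelet_config.go — illustration only: a minimal KubeletConfiguration
// written to the path from the earlier error. Only cgroupDriver is taken from
// this log (the node config dump reports CgroupDriver "systemd"); real
// clusters generate this file via kubeadm or equivalent tooling.
package main

import (
	"log"
	"os"
)

const minimalConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(minimalConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote minimal KubeletConfiguration")
}
```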
Jul 7 06:01:21.025064 kubelet[2065]: I0707 06:01:21.024859 2065 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:01:22.080375 kubelet[2065]: I0707 06:01:22.080331 2065 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:01:22.080375 kubelet[2065]: I0707 06:01:22.080364 2065 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:01:22.080714 kubelet[2065]: I0707 06:01:22.080614 2065 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:01:22.139542 kubelet[2065]: E0707 06:01:22.139506 2065 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:22.140961 kubelet[2065]: I0707 06:01:22.140895 2065 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:01:22.147540 kubelet[2065]: E0707 06:01:22.147508 2065 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:01:22.147596 kubelet[2065]: I0707 06:01:22.147574 2065 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:01:22.150079 kubelet[2065]: I0707 06:01:22.149997 2065 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:01:22.151262 kubelet[2065]: I0707 06:01:22.151213 2065 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:01:22.151415 kubelet[2065]: I0707 06:01:22.151257 2065 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:01:22.151492 kubelet[2065]: I0707 06:01:22.151486 2065 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:01:22.151518 kubelet[2065]: I0707 06:01:22.151496 2065 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:01:22.151696 kubelet[2065]: I0707 06:01:22.151672 2065 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:22.154016 kubelet[2065]: I0707 06:01:22.153993 2065 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:01:22.154016 kubelet[2065]: I0707 06:01:22.154015 2065 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:01:22.154095 kubelet[2065]: I0707 06:01:22.154033 2065 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:01:22.154095 kubelet[2065]: I0707 06:01:22.154043 2065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:01:22.157704 kubelet[2065]: W0707 06:01:22.157614 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:22.157704 kubelet[2065]: E0707 06:01:22.157664 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:22.158615 kubelet[2065]: W0707 06:01:22.158573 2065 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:22.158685 kubelet[2065]: E0707 06:01:22.158626 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:22.158685 kubelet[2065]: I0707 06:01:22.158595 2065 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:01:22.159250 kubelet[2065]: I0707 06:01:22.159232 2065 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:01:22.159356 kubelet[2065]: W0707 06:01:22.159344 2065 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:01:22.160633 kubelet[2065]: I0707 06:01:22.160605 2065 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:01:22.160682 kubelet[2065]: I0707 06:01:22.160640 2065 server.go:1287] "Started kubelet" Jul 7 06:01:22.160906 kubelet[2065]: I0707 06:01:22.160867 2065 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:01:22.161993 kubelet[2065]: I0707 06:01:22.161931 2065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:01:22.162745 kubelet[2065]: I0707 06:01:22.162094 2065 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:01:22.162745 kubelet[2065]: I0707 06:01:22.162260 2065 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:01:22.163604 kubelet[2065]: I0707 06:01:22.163581 2065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:01:22.164054 kubelet[2065]: E0707 06:01:22.163817 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2bd2a8757bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:01:22.1606215 +0000 UTC m=+1.166405121,LastTimestamp:2025-07-07 06:01:22.1606215 +0000 UTC m=+1.166405121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:01:22.164457 kubelet[2065]: I0707 06:01:22.164428 2065 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:01:22.166244 kubelet[2065]: E0707 06:01:22.166215 2065 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:01:22.166372 kubelet[2065]: I0707 06:01:22.166359 2065 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:01:22.166512 kubelet[2065]: I0707 06:01:22.166500 2065 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:01:22.166628 kubelet[2065]: W0707 06:01:22.166585 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:22.166662 kubelet[2065]: E0707 06:01:22.166632 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:22.166662 kubelet[2065]: I0707 06:01:22.166602 2065 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:01:22.166894 kubelet[2065]: I0707 06:01:22.166869 2065 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:01:22.166956 kubelet[2065]: I0707 06:01:22.166938 2065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:01:22.167181 kubelet[2065]: E0707 06:01:22.166973 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms" Jul 7 06:01:22.167882 kubelet[2065]: I0707 06:01:22.167860 2065 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:01:22.178389 kubelet[2065]: I0707 06:01:22.178261 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:01:22.179524 kubelet[2065]: I0707 06:01:22.179286 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:01:22.179524 kubelet[2065]: I0707 06:01:22.179309 2065 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:01:22.179524 kubelet[2065]: I0707 06:01:22.179323 2065 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
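Note: the "connection refused" errors against https://10.0.0.68:6443 above are part of the normal static-pod bootstrap. The kubelet starts first, keeps retrying the API server, and the kube-apiserver it is about to launch from /etc/kubernetes/manifests is what will eventually answer on that port. A small sketch (address taken from the log) that simply watches for the port to come up:

```go
// wait_apiserver.go — sketch: poll the API server endpoint from the log until
// the TCP port accepts connections. The kubelet does its own retrying; this
// just makes the bootstrap ordering visible.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.68:6443" // endpoint the kubelet is retrying above
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("kube-apiserver port is accepting connections:", addr)
			return
		}
		fmt.Println("still refused:", err)
		time.Sleep(2 * time.Second)
	}
}
```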
Jul 7 06:01:22.179524 kubelet[2065]: I0707 06:01:22.179328 2065 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:01:22.179524 kubelet[2065]: E0707 06:01:22.179364 2065 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:01:22.182571 kubelet[2065]: I0707 06:01:22.182371 2065 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:01:22.182571 kubelet[2065]: I0707 06:01:22.182385 2065 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:01:22.182571 kubelet[2065]: I0707 06:01:22.182399 2065 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:22.183064 kubelet[2065]: W0707 06:01:22.183003 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:22.183064 kubelet[2065]: E0707 06:01:22.183039 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:22.258159 kubelet[2065]: I0707 06:01:22.258080 2065 policy_none.go:49] "None policy: Start" Jul 7 06:01:22.258159 kubelet[2065]: I0707 06:01:22.258119 2065 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:01:22.258159 kubelet[2065]: I0707 06:01:22.258168 2065 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:01:22.264417 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:01:22.266978 kubelet[2065]: E0707 06:01:22.266959 2065 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:01:22.276522 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:01:22.279059 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:01:22.279455 kubelet[2065]: E0707 06:01:22.279419 2065 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:01:22.290991 kubelet[2065]: I0707 06:01:22.290760 2065 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:01:22.291658 kubelet[2065]: I0707 06:01:22.291240 2065 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:01:22.291658 kubelet[2065]: I0707 06:01:22.291252 2065 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:01:22.291658 kubelet[2065]: I0707 06:01:22.291566 2065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:01:22.292337 kubelet[2065]: E0707 06:01:22.292318 2065 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:01:22.292458 kubelet[2065]: E0707 06:01:22.292446 2065 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:01:22.368463 kubelet[2065]: E0707 06:01:22.368431 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms" Jul 7 06:01:22.392348 kubelet[2065]: I0707 06:01:22.392300 2065 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:01:22.392655 kubelet[2065]: E0707 06:01:22.392633 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 7 06:01:22.487245 systemd[1]: Created slice kubepods-burstable-pod95bd7102b69fde5cc66e650f261b212c.slice - libcontainer container kubepods-burstable-pod95bd7102b69fde5cc66e650f261b212c.slice. Jul 7 06:01:22.506678 kubelet[2065]: E0707 06:01:22.506603 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:22.509039 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 7 06:01:22.524121 kubelet[2065]: E0707 06:01:22.524083 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:22.526503 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 7 06:01:22.528043 kubelet[2065]: E0707 06:01:22.528005 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:22.569006 kubelet[2065]: I0707 06:01:22.568977 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:22.569098 kubelet[2065]: I0707 06:01:22.569010 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:22.569098 kubelet[2065]: I0707 06:01:22.569033 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:22.569098 kubelet[2065]: I0707 06:01:22.569050 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:22.569098 kubelet[2065]: I0707 06:01:22.569090 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:22.569200 kubelet[2065]: I0707 06:01:22.569106 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:22.569200 kubelet[2065]: I0707 06:01:22.569124 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:22.569200 kubelet[2065]: I0707 06:01:22.569151 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:22.569200 kubelet[2065]: I0707 06:01:22.569170 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:22.593968 kubelet[2065]: I0707 06:01:22.593938 2065 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:01:22.594252 kubelet[2065]: E0707 06:01:22.594231 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 7 06:01:22.769842 kubelet[2065]: E0707 06:01:22.769734 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms" Jul 7 06:01:22.807267 kubelet[2065]: E0707 06:01:22.807235 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:22.807915 containerd[1427]: time="2025-07-07T06:01:22.807881380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95bd7102b69fde5cc66e650f261b212c,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:22.825129 kubelet[2065]: E0707 06:01:22.825072 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:22.825467 containerd[1427]: time="2025-07-07T06:01:22.825434100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:22.828747 kubelet[2065]: E0707 06:01:22.828727 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:22.829156 containerd[1427]: time="2025-07-07T06:01:22.829109700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:22.996339 kubelet[2065]: I0707 06:01:22.996278 2065 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:01:22.996599 kubelet[2065]: E0707 06:01:22.996576 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Jul 7 06:01:23.302432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56983665.mount: Deactivated successfully. 
Jul 7 06:01:23.308498 containerd[1427]: time="2025-07-07T06:01:23.308455980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:01:23.310222 containerd[1427]: time="2025-07-07T06:01:23.310191580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 06:01:23.310926 containerd[1427]: time="2025-07-07T06:01:23.310884980Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:01:23.311938 containerd[1427]: time="2025-07-07T06:01:23.311906900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:01:23.312082 containerd[1427]: time="2025-07-07T06:01:23.312060460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:01:23.313098 containerd[1427]: time="2025-07-07T06:01:23.313063020Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:01:23.313594 containerd[1427]: time="2025-07-07T06:01:23.313572620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:01:23.314519 kubelet[2065]: W0707 06:01:23.314471 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:23.314833 kubelet[2065]: E0707 06:01:23.314796 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:23.315680 containerd[1427]: time="2025-07-07T06:01:23.315646900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:01:23.318201 containerd[1427]: time="2025-07-07T06:01:23.318162700Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.20624ms" Jul 7 06:01:23.319435 containerd[1427]: time="2025-07-07T06:01:23.319396100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.899ms" Jul 7 06:01:23.321793 containerd[1427]: time="2025-07-07T06:01:23.321758260Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.58592ms" Jul 7 06:01:23.385480 kubelet[2065]: W0707 06:01:23.380568 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:23.385480 kubelet[2065]: E0707 06:01:23.380632 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:23.441551 containerd[1427]: time="2025-07-07T06:01:23.441219820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:23.441551 containerd[1427]: time="2025-07-07T06:01:23.441282300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:23.441551 containerd[1427]: time="2025-07-07T06:01:23.441303180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.442032 containerd[1427]: time="2025-07-07T06:01:23.441960460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.442323 containerd[1427]: time="2025-07-07T06:01:23.442263620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:23.442323 containerd[1427]: time="2025-07-07T06:01:23.442308300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:23.442375 containerd[1427]: time="2025-07-07T06:01:23.442323540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.442448 containerd[1427]: time="2025-07-07T06:01:23.442395300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.442596 containerd[1427]: time="2025-07-07T06:01:23.442504940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:23.442596 containerd[1427]: time="2025-07-07T06:01:23.442549860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:23.442596 containerd[1427]: time="2025-07-07T06:01:23.442560180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.442734 containerd[1427]: time="2025-07-07T06:01:23.442664940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:23.443391 kubelet[2065]: W0707 06:01:23.443013 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:23.443391 kubelet[2065]: E0707 06:01:23.443077 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:23.468289 systemd[1]: Started cri-containerd-1e79c157a43ad8410f0facf4ab109e7e2153aab4ab54f02bdceda825c114b4c3.scope - libcontainer container 1e79c157a43ad8410f0facf4ab109e7e2153aab4ab54f02bdceda825c114b4c3. Jul 7 06:01:23.469447 systemd[1]: Started cri-containerd-52d3008de3dc2accd883f6af5d465339581c51c8ae5915490e4fffeb0087ec03.scope - libcontainer container 52d3008de3dc2accd883f6af5d465339581c51c8ae5915490e4fffeb0087ec03. Jul 7 06:01:23.470617 systemd[1]: Started cri-containerd-7c54f0352e2c66add1fa12a3555ccb8c8fb672090e9ecff011c7acbb9364491a.scope - libcontainer container 7c54f0352e2c66add1fa12a3555ccb8c8fb672090e9ecff011c7acbb9364491a. Jul 7 06:01:23.475062 kubelet[2065]: E0707 06:01:23.474958 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2bd2a8757bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:01:22.1606215 +0000 UTC m=+1.166405121,LastTimestamp:2025-07-07 06:01:22.1606215 +0000 UTC m=+1.166405121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:01:23.502712 containerd[1427]: time="2025-07-07T06:01:23.502635420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95bd7102b69fde5cc66e650f261b212c,Namespace:kube-system,Attempt:0,} returns sandbox id \"52d3008de3dc2accd883f6af5d465339581c51c8ae5915490e4fffeb0087ec03\"" Jul 7 06:01:23.502837 containerd[1427]: time="2025-07-07T06:01:23.502795620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e79c157a43ad8410f0facf4ab109e7e2153aab4ab54f02bdceda825c114b4c3\"" Jul 7 06:01:23.503898 kubelet[2065]: E0707 06:01:23.503873 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:23.504177 kubelet[2065]: E0707 06:01:23.504039 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:23.505812 containerd[1427]: time="2025-07-07T06:01:23.505779100Z" level=info msg="CreateContainer within sandbox 
\"52d3008de3dc2accd883f6af5d465339581c51c8ae5915490e4fffeb0087ec03\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:01:23.506306 containerd[1427]: time="2025-07-07T06:01:23.506265900Z" level=info msg="CreateContainer within sandbox \"1e79c157a43ad8410f0facf4ab109e7e2153aab4ab54f02bdceda825c114b4c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:01:23.508203 containerd[1427]: time="2025-07-07T06:01:23.508172140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c54f0352e2c66add1fa12a3555ccb8c8fb672090e9ecff011c7acbb9364491a\"" Jul 7 06:01:23.508796 kubelet[2065]: E0707 06:01:23.508769 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:23.510125 containerd[1427]: time="2025-07-07T06:01:23.510094260Z" level=info msg="CreateContainer within sandbox \"7c54f0352e2c66add1fa12a3555ccb8c8fb672090e9ecff011c7acbb9364491a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:01:23.520593 containerd[1427]: time="2025-07-07T06:01:23.520474380Z" level=info msg="CreateContainer within sandbox \"52d3008de3dc2accd883f6af5d465339581c51c8ae5915490e4fffeb0087ec03\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"98bb24a440f2948e5b9fc1d64694777c1da3e9340676deda406dc0f09ebb2dcc\"" Jul 7 06:01:23.520936 containerd[1427]: time="2025-07-07T06:01:23.520914660Z" level=info msg="StartContainer for \"98bb24a440f2948e5b9fc1d64694777c1da3e9340676deda406dc0f09ebb2dcc\"" Jul 7 06:01:23.525511 containerd[1427]: time="2025-07-07T06:01:23.525443220Z" level=info msg="CreateContainer within sandbox \"7c54f0352e2c66add1fa12a3555ccb8c8fb672090e9ecff011c7acbb9364491a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72337c6ae4a26d06a7a4ebfb960749a37fe9491ba02cd3dad4455390d93dc843\"" Jul 7 06:01:23.525921 containerd[1427]: time="2025-07-07T06:01:23.525893500Z" level=info msg="CreateContainer within sandbox \"1e79c157a43ad8410f0facf4ab109e7e2153aab4ab54f02bdceda825c114b4c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"841bbfe7d31ce528bd0bd46efc102e4462c0c151db7176d52640a433046c5572\"" Jul 7 06:01:23.526130 containerd[1427]: time="2025-07-07T06:01:23.526099700Z" level=info msg="StartContainer for \"72337c6ae4a26d06a7a4ebfb960749a37fe9491ba02cd3dad4455390d93dc843\"" Jul 7 06:01:23.527379 containerd[1427]: time="2025-07-07T06:01:23.527344740Z" level=info msg="StartContainer for \"841bbfe7d31ce528bd0bd46efc102e4462c0c151db7176d52640a433046c5572\"" Jul 7 06:01:23.557285 systemd[1]: Started cri-containerd-72337c6ae4a26d06a7a4ebfb960749a37fe9491ba02cd3dad4455390d93dc843.scope - libcontainer container 72337c6ae4a26d06a7a4ebfb960749a37fe9491ba02cd3dad4455390d93dc843. Jul 7 06:01:23.560814 systemd[1]: Started cri-containerd-841bbfe7d31ce528bd0bd46efc102e4462c0c151db7176d52640a433046c5572.scope - libcontainer container 841bbfe7d31ce528bd0bd46efc102e4462c0c151db7176d52640a433046c5572. Jul 7 06:01:23.561959 systemd[1]: Started cri-containerd-98bb24a440f2948e5b9fc1d64694777c1da3e9340676deda406dc0f09ebb2dcc.scope - libcontainer container 98bb24a440f2948e5b9fc1d64694777c1da3e9340676deda406dc0f09ebb2dcc. 
Jul 7 06:01:23.571279 kubelet[2065]: E0707 06:01:23.571187 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="1.6s" Jul 7 06:01:23.592791 containerd[1427]: time="2025-07-07T06:01:23.591951020Z" level=info msg="StartContainer for \"72337c6ae4a26d06a7a4ebfb960749a37fe9491ba02cd3dad4455390d93dc843\" returns successfully" Jul 7 06:01:23.615314 containerd[1427]: time="2025-07-07T06:01:23.612520460Z" level=info msg="StartContainer for \"98bb24a440f2948e5b9fc1d64694777c1da3e9340676deda406dc0f09ebb2dcc\" returns successfully" Jul 7 06:01:23.615314 containerd[1427]: time="2025-07-07T06:01:23.612614060Z" level=info msg="StartContainer for \"841bbfe7d31ce528bd0bd46efc102e4462c0c151db7176d52640a433046c5572\" returns successfully" Jul 7 06:01:23.691125 kubelet[2065]: W0707 06:01:23.691002 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Jul 7 06:01:23.691125 kubelet[2065]: E0707 06:01:23.691068 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:23.798279 kubelet[2065]: I0707 06:01:23.797975 2065 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:01:24.191284 kubelet[2065]: E0707 06:01:24.191254 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:24.191428 kubelet[2065]: E0707 06:01:24.191367 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:24.194243 kubelet[2065]: E0707 06:01:24.194221 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:24.194462 kubelet[2065]: E0707 06:01:24.194445 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:24.195549 kubelet[2065]: E0707 06:01:24.195527 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:24.195647 kubelet[2065]: E0707 06:01:24.195630 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:25.197219 kubelet[2065]: E0707 06:01:25.197194 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:25.197592 kubelet[2065]: E0707 06:01:25.197305 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 7 06:01:25.197955 kubelet[2065]: E0707 06:01:25.197928 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:01:25.198047 kubelet[2065]: E0707 06:01:25.198033 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:25.972708 kubelet[2065]: E0707 06:01:25.972675 2065 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:01:26.033293 kubelet[2065]: I0707 06:01:26.033243 2065 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:01:26.066697 kubelet[2065]: I0707 06:01:26.066499 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:26.074689 kubelet[2065]: E0707 06:01:26.074659 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:26.074689 kubelet[2065]: I0707 06:01:26.074687 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:26.076157 kubelet[2065]: E0707 06:01:26.076122 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:26.076287 kubelet[2065]: I0707 06:01:26.076168 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:26.078307 kubelet[2065]: E0707 06:01:26.078269 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:26.160745 kubelet[2065]: I0707 06:01:26.160706 2065 apiserver.go:52] "Watching apiserver" Jul 7 06:01:26.166928 kubelet[2065]: I0707 06:01:26.166893 2065 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:01:27.954250 systemd[1]: Reloading requested from client PID 2347 ('systemctl') (unit session-5.scope)... Jul 7 06:01:27.954267 systemd[1]: Reloading... Jul 7 06:01:28.020314 zram_generator::config[2386]: No configuration found. Jul 7 06:01:28.104465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:01:28.136924 kubelet[2065]: I0707 06:01:28.136901 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.141805 kubelet[2065]: E0707 06:01:28.141781 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:28.169835 systemd[1]: Reloading finished in 215 ms. 
Jul 7 06:01:28.200936 kubelet[2065]: I0707 06:01:28.200826 2065 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:01:28.200986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:28.211045 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:01:28.211290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:28.211339 systemd[1]: kubelet.service: Consumed 1.559s CPU time, 130.6M memory peak, 0B memory swap peak. Jul 7 06:01:28.218477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:28.315642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:28.320557 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:01:28.360411 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:01:28.360411 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:01:28.360411 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:01:28.360760 kubelet[2428]: I0707 06:01:28.360477 2428 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:01:28.369182 kubelet[2428]: I0707 06:01:28.368870 2428 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:01:28.369182 kubelet[2428]: I0707 06:01:28.368905 2428 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:01:28.369318 kubelet[2428]: I0707 06:01:28.369192 2428 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:01:28.370595 kubelet[2428]: I0707 06:01:28.370557 2428 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:01:28.373173 kubelet[2428]: I0707 06:01:28.373060 2428 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:01:28.375988 kubelet[2428]: E0707 06:01:28.375900 2428 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:01:28.375988 kubelet[2428]: I0707 06:01:28.375985 2428 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:01:28.379528 kubelet[2428]: I0707 06:01:28.379500 2428 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:01:28.379756 kubelet[2428]: I0707 06:01:28.379718 2428 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:01:28.379907 kubelet[2428]: I0707 06:01:28.379749 2428 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:01:28.379979 kubelet[2428]: I0707 06:01:28.379914 2428 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:01:28.379979 kubelet[2428]: I0707 06:01:28.379923 2428 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:01:28.379979 kubelet[2428]: I0707 06:01:28.379963 2428 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:28.380314 kubelet[2428]: I0707 06:01:28.380083 2428 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:01:28.380314 kubelet[2428]: I0707 06:01:28.380105 2428 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:01:28.380314 kubelet[2428]: I0707 06:01:28.380121 2428 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:01:28.380314 kubelet[2428]: I0707 06:01:28.380131 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:01:28.380914 kubelet[2428]: I0707 06:01:28.380743 2428 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:01:28.382614 kubelet[2428]: I0707 06:01:28.382590 2428 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:01:28.382991 kubelet[2428]: I0707 06:01:28.382972 2428 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:01:28.383026 kubelet[2428]: I0707 06:01:28.383001 2428 server.go:1287] "Started kubelet" Jul 7 06:01:28.385143 kubelet[2428]: I0707 06:01:28.383332 2428 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:01:28.385143 kubelet[2428]: I0707 06:01:28.384816 2428 server.go:479] "Adding debug handlers to 
kubelet server" Jul 7 06:01:28.385143 kubelet[2428]: I0707 06:01:28.383483 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:01:28.385225 kubelet[2428]: I0707 06:01:28.385150 2428 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:01:28.386252 kubelet[2428]: I0707 06:01:28.386232 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:01:28.386297 kubelet[2428]: I0707 06:01:28.386277 2428 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:01:28.388147 kubelet[2428]: I0707 06:01:28.387506 2428 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:01:28.388147 kubelet[2428]: I0707 06:01:28.387628 2428 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:01:28.388147 kubelet[2428]: I0707 06:01:28.387741 2428 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:01:28.388147 kubelet[2428]: E0707 06:01:28.388031 2428 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:01:28.392547 kubelet[2428]: I0707 06:01:28.392517 2428 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:01:28.392663 kubelet[2428]: I0707 06:01:28.392638 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:01:28.395290 kubelet[2428]: I0707 06:01:28.395259 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:01:28.396119 kubelet[2428]: I0707 06:01:28.396095 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:01:28.396174 kubelet[2428]: I0707 06:01:28.396122 2428 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:01:28.396174 kubelet[2428]: I0707 06:01:28.396147 2428 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:01:28.396174 kubelet[2428]: I0707 06:01:28.396162 2428 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:01:28.396284 kubelet[2428]: E0707 06:01:28.396206 2428 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:01:28.406366 kubelet[2428]: E0707 06:01:28.406329 2428 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:01:28.409152 kubelet[2428]: I0707 06:01:28.406699 2428 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:01:28.442207 kubelet[2428]: I0707 06:01:28.442181 2428 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:01:28.442207 kubelet[2428]: I0707 06:01:28.442199 2428 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:01:28.442336 kubelet[2428]: I0707 06:01:28.442219 2428 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:28.442415 kubelet[2428]: I0707 06:01:28.442395 2428 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:01:28.442456 kubelet[2428]: I0707 06:01:28.442409 2428 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:01:28.442456 kubelet[2428]: I0707 06:01:28.442429 2428 policy_none.go:49] "None policy: Start" Jul 7 06:01:28.442456 kubelet[2428]: I0707 06:01:28.442437 2428 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:01:28.442456 kubelet[2428]: I0707 06:01:28.442446 2428 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:01:28.442554 kubelet[2428]: I0707 06:01:28.442541 2428 state_mem.go:75] "Updated machine memory state" Jul 7 06:01:28.445780 kubelet[2428]: I0707 06:01:28.445753 2428 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:01:28.446092 kubelet[2428]: I0707 06:01:28.445938 2428 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:01:28.446092 kubelet[2428]: I0707 06:01:28.445958 2428 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:01:28.446189 kubelet[2428]: I0707 06:01:28.446126 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:01:28.447277 kubelet[2428]: E0707 06:01:28.447242 2428 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:01:28.497144 kubelet[2428]: I0707 06:01:28.497024 2428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:28.497144 kubelet[2428]: I0707 06:01:28.497061 2428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.497239 kubelet[2428]: I0707 06:01:28.497124 2428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:28.503855 kubelet[2428]: E0707 06:01:28.503826 2428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.550295 kubelet[2428]: I0707 06:01:28.550272 2428 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:01:28.558220 kubelet[2428]: I0707 06:01:28.558187 2428 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 06:01:28.558322 kubelet[2428]: I0707 06:01:28.558270 2428 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:01:28.589092 kubelet[2428]: I0707 06:01:28.589050 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.589092 kubelet[2428]: I0707 06:01:28.589089 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:28.589245 kubelet[2428]: I0707 06:01:28.589112 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.589245 kubelet[2428]: I0707 06:01:28.589130 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.589245 kubelet[2428]: I0707 06:01:28.589180 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.589245 kubelet[2428]: I0707 06:01:28.589206 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " 
pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:28.589245 kubelet[2428]: I0707 06:01:28.589225 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:28.589344 kubelet[2428]: I0707 06:01:28.589240 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95bd7102b69fde5cc66e650f261b212c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95bd7102b69fde5cc66e650f261b212c\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:28.589344 kubelet[2428]: I0707 06:01:28.589260 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:28.803323 kubelet[2428]: E0707 06:01:28.803214 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:28.804393 kubelet[2428]: E0707 06:01:28.804358 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:28.804448 kubelet[2428]: E0707 06:01:28.804361 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:29.380951 kubelet[2428]: I0707 06:01:29.380905 2428 apiserver.go:52] "Watching apiserver" Jul 7 06:01:29.388426 kubelet[2428]: I0707 06:01:29.388368 2428 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:01:29.423242 kubelet[2428]: E0707 06:01:29.422564 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:29.423242 kubelet[2428]: E0707 06:01:29.422646 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:29.423242 kubelet[2428]: I0707 06:01:29.423010 2428 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:29.430455 kubelet[2428]: E0707 06:01:29.430065 2428 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:29.430601 kubelet[2428]: E0707 06:01:29.430579 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:29.457229 kubelet[2428]: I0707 06:01:29.456648 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.456633782 podStartE2EDuration="1.456633782s" 
podCreationTimestamp="2025-07-07 06:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:29.456410421 +0000 UTC m=+1.132554002" watchObservedRunningTime="2025-07-07 06:01:29.456633782 +0000 UTC m=+1.132777363" Jul 7 06:01:29.473867 kubelet[2428]: I0707 06:01:29.473798 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.473774545 podStartE2EDuration="1.473774545s" podCreationTimestamp="2025-07-07 06:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:29.464819861 +0000 UTC m=+1.140963442" watchObservedRunningTime="2025-07-07 06:01:29.473774545 +0000 UTC m=+1.149918326" Jul 7 06:01:29.474697 kubelet[2428]: I0707 06:01:29.474599 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.474588389 podStartE2EDuration="1.474588389s" podCreationTimestamp="2025-07-07 06:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:29.473744585 +0000 UTC m=+1.149888166" watchObservedRunningTime="2025-07-07 06:01:29.474588389 +0000 UTC m=+1.150732050" Jul 7 06:01:29.747372 sudo[1569]: pam_unix(sudo:session): session closed for user root Jul 7 06:01:29.749445 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:29.752466 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:59188.service: Deactivated successfully. Jul 7 06:01:29.754179 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:01:29.754336 systemd[1]: session-5.scope: Consumed 6.746s CPU time, 155.7M memory peak, 0B memory swap peak. Jul 7 06:01:29.755564 systemd-logind[1414]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:01:29.756885 systemd-logind[1414]: Removed session 5. Jul 7 06:01:30.423476 kubelet[2428]: E0707 06:01:30.423343 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:30.423476 kubelet[2428]: E0707 06:01:30.423410 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:31.425098 kubelet[2428]: E0707 06:01:31.425033 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:32.562190 kubelet[2428]: E0707 06:01:32.562089 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:33.462737 kubelet[2428]: I0707 06:01:33.462705 2428 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:01:33.462998 containerd[1427]: time="2025-07-07T06:01:33.462965295Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:01:33.463692 kubelet[2428]: I0707 06:01:33.463410 2428 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:01:34.209772 systemd[1]: Created slice kubepods-burstable-podb5158f9d_d622_4695_83b7_572a1a02bcb4.slice - libcontainer container kubepods-burstable-podb5158f9d_d622_4695_83b7_572a1a02bcb4.slice. Jul 7 06:01:34.214398 systemd[1]: Created slice kubepods-besteffort-podae45e434_bf31_4436_8331_5191e4aed7e5.slice - libcontainer container kubepods-besteffort-podae45e434_bf31_4436_8331_5191e4aed7e5.slice. Jul 7 06:01:34.228347 kubelet[2428]: I0707 06:01:34.228309 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b5158f9d-d622-4695-83b7-572a1a02bcb4-cni-plugin\") pod \"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228623 kubelet[2428]: I0707 06:01:34.228360 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b5158f9d-d622-4695-83b7-572a1a02bcb4-cni\") pod \"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228623 kubelet[2428]: I0707 06:01:34.228412 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae45e434-bf31-4436-8331-5191e4aed7e5-xtables-lock\") pod \"kube-proxy-rp7xd\" (UID: \"ae45e434-bf31-4436-8331-5191e4aed7e5\") " pod="kube-system/kube-proxy-rp7xd" Jul 7 06:01:34.228623 kubelet[2428]: I0707 06:01:34.228438 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5158f9d-d622-4695-83b7-572a1a02bcb4-run\") pod \"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228623 kubelet[2428]: I0707 06:01:34.228453 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b5158f9d-d622-4695-83b7-572a1a02bcb4-flannel-cfg\") pod \"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228623 kubelet[2428]: I0707 06:01:34.228468 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckp78\" (UniqueName: \"kubernetes.io/projected/ae45e434-bf31-4436-8331-5191e4aed7e5-kube-api-access-ckp78\") pod \"kube-proxy-rp7xd\" (UID: \"ae45e434-bf31-4436-8331-5191e4aed7e5\") " pod="kube-system/kube-proxy-rp7xd" Jul 7 06:01:34.228735 kubelet[2428]: I0707 06:01:34.228485 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5158f9d-d622-4695-83b7-572a1a02bcb4-xtables-lock\") pod \"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228735 kubelet[2428]: I0707 06:01:34.228500 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z4j\" (UniqueName: \"kubernetes.io/projected/b5158f9d-d622-4695-83b7-572a1a02bcb4-kube-api-access-h6z4j\") pod 
\"kube-flannel-ds-pbnwg\" (UID: \"b5158f9d-d622-4695-83b7-572a1a02bcb4\") " pod="kube-flannel/kube-flannel-ds-pbnwg" Jul 7 06:01:34.228735 kubelet[2428]: I0707 06:01:34.228515 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae45e434-bf31-4436-8331-5191e4aed7e5-kube-proxy\") pod \"kube-proxy-rp7xd\" (UID: \"ae45e434-bf31-4436-8331-5191e4aed7e5\") " pod="kube-system/kube-proxy-rp7xd" Jul 7 06:01:34.228735 kubelet[2428]: I0707 06:01:34.228529 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae45e434-bf31-4436-8331-5191e4aed7e5-lib-modules\") pod \"kube-proxy-rp7xd\" (UID: \"ae45e434-bf31-4436-8331-5191e4aed7e5\") " pod="kube-system/kube-proxy-rp7xd" Jul 7 06:01:34.512495 kubelet[2428]: E0707 06:01:34.512134 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:34.512652 containerd[1427]: time="2025-07-07T06:01:34.512616306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pbnwg,Uid:b5158f9d-d622-4695-83b7-572a1a02bcb4,Namespace:kube-flannel,Attempt:0,}" Jul 7 06:01:34.525317 kubelet[2428]: E0707 06:01:34.525270 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:34.525849 containerd[1427]: time="2025-07-07T06:01:34.525810632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp7xd,Uid:ae45e434-bf31-4436-8331-5191e4aed7e5,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:34.536394 containerd[1427]: time="2025-07-07T06:01:34.534968264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:34.536394 containerd[1427]: time="2025-07-07T06:01:34.535017144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:34.536394 containerd[1427]: time="2025-07-07T06:01:34.535032664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:34.536394 containerd[1427]: time="2025-07-07T06:01:34.535099784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:34.545119 containerd[1427]: time="2025-07-07T06:01:34.544864099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:34.545119 containerd[1427]: time="2025-07-07T06:01:34.544920739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:34.545119 containerd[1427]: time="2025-07-07T06:01:34.544934979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:34.545119 containerd[1427]: time="2025-07-07T06:01:34.545008139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:34.555312 systemd[1]: Started cri-containerd-3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1.scope - libcontainer container 3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1. Jul 7 06:01:34.559207 systemd[1]: Started cri-containerd-739c66275c96ec28a3bfc7b73b5fa0ff77c0744da48c66d832a140b6ae5186e6.scope - libcontainer container 739c66275c96ec28a3bfc7b73b5fa0ff77c0744da48c66d832a140b6ae5186e6. Jul 7 06:01:34.579518 containerd[1427]: time="2025-07-07T06:01:34.579412980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp7xd,Uid:ae45e434-bf31-4436-8331-5191e4aed7e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"739c66275c96ec28a3bfc7b73b5fa0ff77c0744da48c66d832a140b6ae5186e6\"" Jul 7 06:01:34.583525 kubelet[2428]: E0707 06:01:34.583456 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:34.589759 containerd[1427]: time="2025-07-07T06:01:34.589722536Z" level=info msg="CreateContainer within sandbox \"739c66275c96ec28a3bfc7b73b5fa0ff77c0744da48c66d832a140b6ae5186e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:01:34.591089 containerd[1427]: time="2025-07-07T06:01:34.590691619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pbnwg,Uid:b5158f9d-d622-4695-83b7-572a1a02bcb4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\"" Jul 7 06:01:34.591793 kubelet[2428]: E0707 06:01:34.591552 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:34.593411 containerd[1427]: time="2025-07-07T06:01:34.593362149Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 7 06:01:34.608935 containerd[1427]: time="2025-07-07T06:01:34.608886643Z" level=info msg="CreateContainer within sandbox \"739c66275c96ec28a3bfc7b73b5fa0ff77c0744da48c66d832a140b6ae5186e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a9836f82fab3b61ceceb6245732c674e31b3471726b571c0ff28c2fdbe3527b\"" Jul 7 06:01:34.609600 containerd[1427]: time="2025-07-07T06:01:34.609568926Z" level=info msg="StartContainer for \"6a9836f82fab3b61ceceb6245732c674e31b3471726b571c0ff28c2fdbe3527b\"" Jul 7 06:01:34.639302 systemd[1]: Started cri-containerd-6a9836f82fab3b61ceceb6245732c674e31b3471726b571c0ff28c2fdbe3527b.scope - libcontainer container 6a9836f82fab3b61ceceb6245732c674e31b3471726b571c0ff28c2fdbe3527b. 
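Each "Started cri-containerd-<id>.scope" entry above corresponds to a containerd task whose ID is the long hash in the unit name. As an illustration only, the sketch below lists those containers through the containerd client in the CRI's "k8s.io" namespace; the socket path is again an assumed default.

    // Sketch only: enumerate the containers whose IDs appear in the
    // "Started cri-containerd-<id>.scope" systemd entries above.
    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            id := c.ID()
            if img, err := c.Image(ctx); err == nil {
                fmt.Println(id, img.Name())
            } else {
                fmt.Println(id) // sandbox or container without a resolvable image reference
            }
        }
    }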
Jul 7 06:01:34.663797 containerd[1427]: time="2025-07-07T06:01:34.661985309Z" level=info msg="StartContainer for \"6a9836f82fab3b61ceceb6245732c674e31b3471726b571c0ff28c2fdbe3527b\" returns successfully" Jul 7 06:01:35.435593 kubelet[2428]: E0707 06:01:35.435554 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:35.444697 kubelet[2428]: I0707 06:01:35.444459 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rp7xd" podStartSLOduration=1.444444357 podStartE2EDuration="1.444444357s" podCreationTimestamp="2025-07-07 06:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:35.444120435 +0000 UTC m=+7.120264056" watchObservedRunningTime="2025-07-07 06:01:35.444444357 +0000 UTC m=+7.120587938" Jul 7 06:01:35.763339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510001.mount: Deactivated successfully. Jul 7 06:01:35.786200 containerd[1427]: time="2025-07-07T06:01:35.786134480Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:35.787041 containerd[1427]: time="2025-07-07T06:01:35.786813962Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jul 7 06:01:35.787755 containerd[1427]: time="2025-07-07T06:01:35.787716485Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:35.789919 containerd[1427]: time="2025-07-07T06:01:35.789889132Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:35.790722 containerd[1427]: time="2025-07-07T06:01:35.790680855Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.197286866s" Jul 7 06:01:35.790722 containerd[1427]: time="2025-07-07T06:01:35.790707775Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jul 7 06:01:35.793923 containerd[1427]: time="2025-07-07T06:01:35.793702305Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 7 06:01:35.802182 containerd[1427]: time="2025-07-07T06:01:35.802135333Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705\"" Jul 7 06:01:35.802693 containerd[1427]: time="2025-07-07T06:01:35.802672974Z" level=info msg="StartContainer for 
\"a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705\"" Jul 7 06:01:35.827384 systemd[1]: Started cri-containerd-a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705.scope - libcontainer container a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705. Jul 7 06:01:35.850946 containerd[1427]: time="2025-07-07T06:01:35.850902373Z" level=info msg="StartContainer for \"a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705\" returns successfully" Jul 7 06:01:35.851643 systemd[1]: cri-containerd-a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705.scope: Deactivated successfully. Jul 7 06:01:35.889772 containerd[1427]: time="2025-07-07T06:01:35.889718021Z" level=info msg="shim disconnected" id=a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705 namespace=k8s.io Jul 7 06:01:35.889772 containerd[1427]: time="2025-07-07T06:01:35.889770141Z" level=warning msg="cleaning up after shim disconnected" id=a4f0279408d1ce6530169f3d554b307ed32929e5fa0ded9aefacee9fe520f705 namespace=k8s.io Jul 7 06:01:35.889772 containerd[1427]: time="2025-07-07T06:01:35.889780901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:01:36.440230 kubelet[2428]: E0707 06:01:36.440191 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:36.441179 containerd[1427]: time="2025-07-07T06:01:36.441134504Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 7 06:01:37.573975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653339706.mount: Deactivated successfully. Jul 7 06:01:38.766258 containerd[1427]: time="2025-07-07T06:01:38.766211113Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:38.767185 containerd[1427]: time="2025-07-07T06:01:38.766906115Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jul 7 06:01:38.768681 containerd[1427]: time="2025-07-07T06:01:38.768038398Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:38.771021 containerd[1427]: time="2025-07-07T06:01:38.770989846Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:38.773895 containerd[1427]: time="2025-07-07T06:01:38.773695413Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.332507629s" Jul 7 06:01:38.773971 containerd[1427]: time="2025-07-07T06:01:38.773899014Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jul 7 06:01:38.778801 containerd[1427]: time="2025-07-07T06:01:38.777947185Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:01:38.778903 kubelet[2428]: E0707 06:01:38.778618 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:38.788597 containerd[1427]: time="2025-07-07T06:01:38.788554453Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935\"" Jul 7 06:01:38.790154 containerd[1427]: time="2025-07-07T06:01:38.789983977Z" level=info msg="StartContainer for \"30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935\"" Jul 7 06:01:38.818309 systemd[1]: Started cri-containerd-30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935.scope - libcontainer container 30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935. Jul 7 06:01:38.840021 systemd[1]: cri-containerd-30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935.scope: Deactivated successfully. Jul 7 06:01:38.840735 containerd[1427]: time="2025-07-07T06:01:38.840680914Z" level=info msg="StartContainer for \"30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935\" returns successfully" Jul 7 06:01:38.878732 kubelet[2428]: I0707 06:01:38.878697 2428 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:01:38.916327 systemd[1]: Created slice kubepods-burstable-pod9c4883ea_25e4_451a_9f7e_5a89f0a61de2.slice - libcontainer container kubepods-burstable-pod9c4883ea_25e4_451a_9f7e_5a89f0a61de2.slice. Jul 7 06:01:38.935648 systemd[1]: Created slice kubepods-burstable-pod10863a02_deff_45ac_a126_4b6a7634b18b.slice - libcontainer container kubepods-burstable-pod10863a02_deff_45ac_a126_4b6a7634b18b.slice. 
Jul 7 06:01:38.962724 kubelet[2428]: I0707 06:01:38.962674 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6tc\" (UniqueName: \"kubernetes.io/projected/9c4883ea-25e4-451a-9f7e-5a89f0a61de2-kube-api-access-sx6tc\") pod \"coredns-668d6bf9bc-cxcg6\" (UID: \"9c4883ea-25e4-451a-9f7e-5a89f0a61de2\") " pod="kube-system/coredns-668d6bf9bc-cxcg6" Jul 7 06:01:38.962724 kubelet[2428]: I0707 06:01:38.962718 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10863a02-deff-45ac-a126-4b6a7634b18b-config-volume\") pod \"coredns-668d6bf9bc-h9vwc\" (UID: \"10863a02-deff-45ac-a126-4b6a7634b18b\") " pod="kube-system/coredns-668d6bf9bc-h9vwc" Jul 7 06:01:38.962880 kubelet[2428]: I0707 06:01:38.962738 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7vn\" (UniqueName: \"kubernetes.io/projected/10863a02-deff-45ac-a126-4b6a7634b18b-kube-api-access-nm7vn\") pod \"coredns-668d6bf9bc-h9vwc\" (UID: \"10863a02-deff-45ac-a126-4b6a7634b18b\") " pod="kube-system/coredns-668d6bf9bc-h9vwc" Jul 7 06:01:38.962880 kubelet[2428]: I0707 06:01:38.962798 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4883ea-25e4-451a-9f7e-5a89f0a61de2-config-volume\") pod \"coredns-668d6bf9bc-cxcg6\" (UID: \"9c4883ea-25e4-451a-9f7e-5a89f0a61de2\") " pod="kube-system/coredns-668d6bf9bc-cxcg6" Jul 7 06:01:38.966694 containerd[1427]: time="2025-07-07T06:01:38.966641456Z" level=info msg="shim disconnected" id=30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935 namespace=k8s.io Jul 7 06:01:38.966874 containerd[1427]: time="2025-07-07T06:01:38.966723936Z" level=warning msg="cleaning up after shim disconnected" id=30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935 namespace=k8s.io Jul 7 06:01:38.966874 containerd[1427]: time="2025-07-07T06:01:38.966736256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:01:39.116909 kubelet[2428]: E0707 06:01:39.116871 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.274351 kubelet[2428]: E0707 06:01:39.274305 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.274480 kubelet[2428]: E0707 06:01:39.274407 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.274872 containerd[1427]: time="2025-07-07T06:01:39.274797045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxcg6,Uid:9c4883ea-25e4-451a-9f7e-5a89f0a61de2,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:39.275530 containerd[1427]: time="2025-07-07T06:01:39.275452047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h9vwc,Uid:10863a02-deff-45ac-a126-4b6a7634b18b,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:39.341789 containerd[1427]: time="2025-07-07T06:01:39.341738815Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-h9vwc,Uid:10863a02-deff-45ac-a126-4b6a7634b18b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d72d59aec1746162950626b44149121490f154d5e5ebbd01f6e94c4beaf29dfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 7 06:01:39.342226 kubelet[2428]: E0707 06:01:39.342124 2428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d59aec1746162950626b44149121490f154d5e5ebbd01f6e94c4beaf29dfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 7 06:01:39.342295 kubelet[2428]: E0707 06:01:39.342265 2428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d59aec1746162950626b44149121490f154d5e5ebbd01f6e94c4beaf29dfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-h9vwc" Jul 7 06:01:39.342295 kubelet[2428]: E0707 06:01:39.342287 2428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72d59aec1746162950626b44149121490f154d5e5ebbd01f6e94c4beaf29dfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-h9vwc" Jul 7 06:01:39.342349 kubelet[2428]: E0707 06:01:39.342328 2428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h9vwc_kube-system(10863a02-deff-45ac-a126-4b6a7634b18b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h9vwc_kube-system(10863a02-deff-45ac-a126-4b6a7634b18b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d72d59aec1746162950626b44149121490f154d5e5ebbd01f6e94c4beaf29dfc\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-h9vwc" podUID="10863a02-deff-45ac-a126-4b6a7634b18b" Jul 7 06:01:39.342927 containerd[1427]: time="2025-07-07T06:01:39.342767018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxcg6,Uid:9c4883ea-25e4-451a-9f7e-5a89f0a61de2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66b3d55df75c6c7141f5de183f6a6013a4a034909df31748788b846de5409f79\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 7 06:01:39.342992 kubelet[2428]: E0707 06:01:39.342928 2428 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b3d55df75c6c7141f5de183f6a6013a4a034909df31748788b846de5409f79\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 7 06:01:39.342992 kubelet[2428]: E0707 06:01:39.342970 2428 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b3d55df75c6c7141f5de183f6a6013a4a034909df31748788b846de5409f79\": plugin type=\"flannel\" 
failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-cxcg6" Jul 7 06:01:39.342992 kubelet[2428]: E0707 06:01:39.342986 2428 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b3d55df75c6c7141f5de183f6a6013a4a034909df31748788b846de5409f79\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-cxcg6" Jul 7 06:01:39.343065 kubelet[2428]: E0707 06:01:39.343015 2428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cxcg6_kube-system(9c4883ea-25e4-451a-9f7e-5a89f0a61de2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cxcg6_kube-system(9c4883ea-25e4-451a-9f7e-5a89f0a61de2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66b3d55df75c6c7141f5de183f6a6013a4a034909df31748788b846de5409f79\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-cxcg6" podUID="9c4883ea-25e4-451a-9f7e-5a89f0a61de2" Jul 7 06:01:39.460840 kubelet[2428]: E0707 06:01:39.460735 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.461377 kubelet[2428]: E0707 06:01:39.461191 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.461377 kubelet[2428]: E0707 06:01:39.461302 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.477054 containerd[1427]: time="2025-07-07T06:01:39.476999199Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 7 06:01:39.486956 containerd[1427]: time="2025-07-07T06:01:39.486904024Z" level=info msg="CreateContainer within sandbox \"3e0cb36098bdb39f3dfc644c60bbc1e75d767e99383ec3116686156a93aad7e1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e8813d15bd69cca1fee5c76b423357ceeea1675122edfe9968e89400c19ce85a\"" Jul 7 06:01:39.487488 containerd[1427]: time="2025-07-07T06:01:39.487367385Z" level=info msg="StartContainer for \"e8813d15bd69cca1fee5c76b423357ceeea1675122edfe9968e89400c19ce85a\"" Jul 7 06:01:39.513287 systemd[1]: Started cri-containerd-e8813d15bd69cca1fee5c76b423357ceeea1675122edfe9968e89400c19ce85a.scope - libcontainer container e8813d15bd69cca1fee5c76b423357ceeea1675122edfe9968e89400c19ce85a. Jul 7 06:01:39.538943 containerd[1427]: time="2025-07-07T06:01:39.538895516Z" level=info msg="StartContainer for \"e8813d15bd69cca1fee5c76b423357ceeea1675122edfe9968e89400c19ce85a\" returns successfully" Jul 7 06:01:39.788741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30ac0fe1bca64f4f04b31deaebe509e5cf492be4460d926ba7948aeeacf09935-rootfs.mount: Deactivated successfully. 
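Both RunPodSandbox failures above come from the flannel CNI plugin: it cannot delegate to the bridge plugin until the kube-flannel daemon has written its runtime file at /run/flannel/subnet.env. Once flannel is running, that file is a small set of KEY=VALUE pairs; a sketch with illustrative values, inferred from the 192.168.0.0/24 range, the 192.168.0.0/17 route, and the MTU 1450 seen in the bridge delegation logged further below:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

With the file present, the plugin can fill in the host-local IPAM ranges and routes, which is what happens on the later, successful sandbox attempts.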
Jul 7 06:01:40.465128 kubelet[2428]: E0707 06:01:40.464843 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:40.477172 kubelet[2428]: I0707 06:01:40.475860 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pbnwg" podStartSLOduration=2.29336547 podStartE2EDuration="6.475844821s" podCreationTimestamp="2025-07-07 06:01:34 +0000 UTC" firstStartedPulling="2025-07-07 06:01:34.592322545 +0000 UTC m=+6.268466166" lastFinishedPulling="2025-07-07 06:01:38.774801896 +0000 UTC m=+10.450945517" observedRunningTime="2025-07-07 06:01:40.47536886 +0000 UTC m=+12.151512481" watchObservedRunningTime="2025-07-07 06:01:40.475844821 +0000 UTC m=+12.151988442" Jul 7 06:01:40.623889 systemd-networkd[1365]: flannel.1: Link UP Jul 7 06:01:40.623896 systemd-networkd[1365]: flannel.1: Gained carrier Jul 7 06:01:41.466598 kubelet[2428]: E0707 06:01:41.466569 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:41.660249 update_engine[1416]: I20250707 06:01:41.660167 1416 update_attempter.cc:509] Updating boot flags... Jul 7 06:01:41.676241 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3092) Jul 7 06:01:41.702172 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3096) Jul 7 06:01:42.570400 kubelet[2428]: E0707 06:01:42.570332 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:42.682406 systemd-networkd[1365]: flannel.1: Gained IPv6LL Jul 7 06:01:43.470853 kubelet[2428]: E0707 06:01:43.469905 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:49.397119 kubelet[2428]: E0707 06:01:49.396655 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:49.397959 containerd[1427]: time="2025-07-07T06:01:49.397903695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxcg6,Uid:9c4883ea-25e4-451a-9f7e-5a89f0a61de2,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:49.421692 systemd-networkd[1365]: cni0: Link UP Jul 7 06:01:49.421698 systemd-networkd[1365]: cni0: Gained carrier Jul 7 06:01:49.425299 systemd-networkd[1365]: cni0: Lost carrier Jul 7 06:01:49.428036 systemd-networkd[1365]: veth6d456b09: Link UP Jul 7 06:01:49.430275 kernel: cni0: port 1(veth6d456b09) entered blocking state Jul 7 06:01:49.430334 kernel: cni0: port 1(veth6d456b09) entered disabled state Jul 7 06:01:49.430350 kernel: veth6d456b09: entered allmulticast mode Jul 7 06:01:49.432151 kernel: veth6d456b09: entered promiscuous mode Jul 7 06:01:49.432231 kernel: cni0: port 1(veth6d456b09) entered blocking state Jul 7 06:01:49.432246 kernel: cni0: port 1(veth6d456b09) entered forwarding state Jul 7 06:01:49.433218 kernel: cni0: port 1(veth6d456b09) entered disabled state Jul 7 06:01:49.441601 kernel: cni0: port 1(veth6d456b09) entered blocking state Jul 7 06:01:49.441693 kernel: cni0: port 1(veth6d456b09) entered 
forwarding state Jul 7 06:01:49.441828 systemd-networkd[1365]: veth6d456b09: Gained carrier Jul 7 06:01:49.442482 systemd-networkd[1365]: cni0: Gained carrier Jul 7 06:01:49.443401 containerd[1427]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Jul 7 06:01:49.443401 containerd[1427]: delegateAdd: netconf sent to delegate plugin: Jul 7 06:01:49.467806 containerd[1427]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-07T06:01:49.467718068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:49.467806 containerd[1427]: time="2025-07-07T06:01:49.467772188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:49.467806 containerd[1427]: time="2025-07-07T06:01:49.467782908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:49.468025 containerd[1427]: time="2025-07-07T06:01:49.467853508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:49.490300 systemd[1]: Started cri-containerd-89485456ced385c958132c75be361608b981b67e925a36997d26e8c0917ca425.scope - libcontainer container 89485456ced385c958132c75be361608b981b67e925a36997d26e8c0917ca425. 
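The map[string]interface{} dump and the JSON that follows are the flannel CNI plugin printing the netconf it built and then delegated to the bridge plugin (network name cbr0, host-local IPAM over 192.168.0.0/24, MTU 1450 taken from subnet.env). The on-disk CNI configuration that drives this is the conflist the install-cni container copies into /etc/cni/net.d; a representative sketch of the usual kube-flannel file (the exact filename and the portmap entry are common defaults assumed here, not shown in the log):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }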
Jul 7 06:01:49.499304 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:01:49.514723 containerd[1427]: time="2025-07-07T06:01:49.514687931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxcg6,Uid:9c4883ea-25e4-451a-9f7e-5a89f0a61de2,Namespace:kube-system,Attempt:0,} returns sandbox id \"89485456ced385c958132c75be361608b981b67e925a36997d26e8c0917ca425\"" Jul 7 06:01:49.515469 kubelet[2428]: E0707 06:01:49.515447 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:49.519110 containerd[1427]: time="2025-07-07T06:01:49.519079856Z" level=info msg="CreateContainer within sandbox \"89485456ced385c958132c75be361608b981b67e925a36997d26e8c0917ca425\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:01:49.531821 containerd[1427]: time="2025-07-07T06:01:49.531782633Z" level=info msg="CreateContainer within sandbox \"89485456ced385c958132c75be361608b981b67e925a36997d26e8c0917ca425\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32b355492a32c84ac9eb2031e08a791c1d16a3f1d5a741cd699f5fb976eb139a\"" Jul 7 06:01:49.532189 containerd[1427]: time="2025-07-07T06:01:49.532167034Z" level=info msg="StartContainer for \"32b355492a32c84ac9eb2031e08a791c1d16a3f1d5a741cd699f5fb976eb139a\"" Jul 7 06:01:49.563288 systemd[1]: Started cri-containerd-32b355492a32c84ac9eb2031e08a791c1d16a3f1d5a741cd699f5fb976eb139a.scope - libcontainer container 32b355492a32c84ac9eb2031e08a791c1d16a3f1d5a741cd699f5fb976eb139a. Jul 7 06:01:49.584500 containerd[1427]: time="2025-07-07T06:01:49.584455984Z" level=info msg="StartContainer for \"32b355492a32c84ac9eb2031e08a791c1d16a3f1d5a741cd699f5fb976eb139a\" returns successfully" Jul 7 06:01:50.406640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456781497.mount: Deactivated successfully. 
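This time RunPodSandbox for coredns-668d6bf9bc-cxcg6 returns a sandbox id instead of the earlier subnet.env error, because the flannel daemon has now written the file the plugin loads. The loading step itself is plain KEY=VALUE parsing; a self-contained sketch of that idea (an illustration of the format, not flannel's actual source):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // readSubnetEnv parses a /run/flannel/subnet.env style file into a map.
    func readSubnetEnv(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            // This is the "no such file or directory" case seen in the failed attempts above.
            return nil, err
        }
        defer f.Close()

        env := make(map[string]string)
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                env[k] = v
            }
        }
        return env, sc.Err()
    }

    func main() {
        env, err := readSubnetEnv("/run/flannel/subnet.env")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pod subnet for this node:", env["FLANNEL_SUBNET"])
    }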
Jul 7 06:01:50.489073 kubelet[2428]: E0707 06:01:50.489035 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:50.503330 kubelet[2428]: I0707 06:01:50.503270 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cxcg6" podStartSLOduration=16.503253967 podStartE2EDuration="16.503253967s" podCreationTimestamp="2025-07-07 06:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:50.502895526 +0000 UTC m=+22.179039147" watchObservedRunningTime="2025-07-07 06:01:50.503253967 +0000 UTC m=+22.179397588" Jul 7 06:01:50.874334 systemd-networkd[1365]: cni0: Gained IPv6LL Jul 7 06:01:50.938280 systemd-networkd[1365]: veth6d456b09: Gained IPv6LL Jul 7 06:01:51.491200 kubelet[2428]: E0707 06:01:51.491098 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:52.397192 kubelet[2428]: E0707 06:01:52.396985 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:52.398289 containerd[1427]: time="2025-07-07T06:01:52.398243195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h9vwc,Uid:10863a02-deff-45ac-a126-4b6a7634b18b,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:52.492573 kubelet[2428]: E0707 06:01:52.492500 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:52.520947 systemd-networkd[1365]: veth200f07e8: Link UP Jul 7 06:01:52.523310 kernel: cni0: port 2(veth200f07e8) entered blocking state Jul 7 06:01:52.523390 kernel: cni0: port 2(veth200f07e8) entered disabled state Jul 7 06:01:52.523408 kernel: veth200f07e8: entered allmulticast mode Jul 7 06:01:52.523430 kernel: veth200f07e8: entered promiscuous mode Jul 7 06:01:52.529391 kernel: cni0: port 2(veth200f07e8) entered blocking state Jul 7 06:01:52.529468 kernel: cni0: port 2(veth200f07e8) entered forwarding state Jul 7 06:01:52.529315 systemd-networkd[1365]: veth200f07e8: Gained carrier Jul 7 06:01:52.531539 containerd[1427]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Jul 7 06:01:52.531539 containerd[1427]: delegateAdd: netconf sent to delegate plugin: Jul 7 06:01:52.577960 containerd[1427]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-07T06:01:52.577684112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:01:52.577960 containerd[1427]: time="2025-07-07T06:01:52.577733192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:01:52.577960 containerd[1427]: time="2025-07-07T06:01:52.577753152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:52.577960 containerd[1427]: time="2025-07-07T06:01:52.577833992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:01:52.600303 systemd[1]: Started cri-containerd-16fb2cd2f1c5fefec0e8ed99d26af355fd3f6dc8c9a3272f75f03c5eb48cd594.scope - libcontainer container 16fb2cd2f1c5fefec0e8ed99d26af355fd3f6dc8c9a3272f75f03c5eb48cd594. Jul 7 06:01:52.609214 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:01:52.624248 containerd[1427]: time="2025-07-07T06:01:52.624216403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h9vwc,Uid:10863a02-deff-45ac-a126-4b6a7634b18b,Namespace:kube-system,Attempt:0,} returns sandbox id \"16fb2cd2f1c5fefec0e8ed99d26af355fd3f6dc8c9a3272f75f03c5eb48cd594\"" Jul 7 06:01:52.628730 kubelet[2428]: E0707 06:01:52.628702 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:52.642436 containerd[1427]: time="2025-07-07T06:01:52.642356783Z" level=info msg="CreateContainer within sandbox \"16fb2cd2f1c5fefec0e8ed99d26af355fd3f6dc8c9a3272f75f03c5eb48cd594\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:01:52.651573 containerd[1427]: time="2025-07-07T06:01:52.651343233Z" level=info msg="CreateContainer within sandbox \"16fb2cd2f1c5fefec0e8ed99d26af355fd3f6dc8c9a3272f75f03c5eb48cd594\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ba2455da86a66d569f9ba381e354aba99b93dd1c9a3523a001d9e84101188ce\"" Jul 7 06:01:52.652126 containerd[1427]: time="2025-07-07T06:01:52.652001914Z" level=info msg="StartContainer for \"3ba2455da86a66d569f9ba381e354aba99b93dd1c9a3523a001d9e84101188ce\"" Jul 7 06:01:52.682354 systemd[1]: Started cri-containerd-3ba2455da86a66d569f9ba381e354aba99b93dd1c9a3523a001d9e84101188ce.scope - libcontainer container 3ba2455da86a66d569f9ba381e354aba99b93dd1c9a3523a001d9e84101188ce. 
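The kubelet dns.go:153 errors that recur throughout this log mean the resolv.conf kubelet uses to build pod DNS configuration lists more than the three nameservers it supports, so it keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. An illustrative host /etc/resolv.conf that would produce exactly this message (the fourth entry is hypothetical):

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4

The message is a warning about truncation rather than a failure; name resolution for the coredns pods above proceeds with the three applied servers.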
Jul 7 06:01:52.702463 containerd[1427]: time="2025-07-07T06:01:52.702327569Z" level=info msg="StartContainer for \"3ba2455da86a66d569f9ba381e354aba99b93dd1c9a3523a001d9e84101188ce\" returns successfully" Jul 7 06:01:53.507167 kubelet[2428]: E0707 06:01:53.507129 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:53.545338 kubelet[2428]: I0707 06:01:53.545272 2428 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h9vwc" podStartSLOduration=19.545254017 podStartE2EDuration="19.545254017s" podCreationTimestamp="2025-07-07 06:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:53.536222128 +0000 UTC m=+25.212365749" watchObservedRunningTime="2025-07-07 06:01:53.545254017 +0000 UTC m=+25.221397638" Jul 7 06:01:54.320979 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:48962.service - OpenSSH per-connection server daemon (10.0.0.1:48962). Jul 7 06:01:54.362674 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 48962 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:54.364083 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:54.367875 systemd-logind[1414]: New session 6 of user core. Jul 7 06:01:54.378325 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:01:54.395240 systemd-networkd[1365]: veth200f07e8: Gained IPv6LL Jul 7 06:01:54.495304 sshd[3397]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:54.498908 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:48962.service: Deactivated successfully. Jul 7 06:01:54.500808 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:01:54.502125 systemd-logind[1414]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:01:54.502960 systemd-logind[1414]: Removed session 6. Jul 7 06:01:54.521648 kubelet[2428]: E0707 06:01:54.521615 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:55.517276 kubelet[2428]: E0707 06:01:55.517195 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:59.514697 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:48970.service - OpenSSH per-connection server daemon (10.0.0.1:48970). Jul 7 06:01:59.585793 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 48970 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:01:59.587037 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:01:59.591273 systemd-logind[1414]: New session 7 of user core. Jul 7 06:01:59.597422 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:01:59.704395 sshd[3438]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:59.707746 systemd-logind[1414]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:01:59.708499 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:48970.service: Deactivated successfully. Jul 7 06:01:59.710070 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:01:59.710899 systemd-logind[1414]: Removed session 7. 
Jul 7 06:02:04.721354 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:41990.service - OpenSSH per-connection server daemon (10.0.0.1:41990). Jul 7 06:02:04.758718 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 41990 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:04.760062 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:04.764061 systemd-logind[1414]: New session 8 of user core. Jul 7 06:02:04.776296 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:02:04.888433 sshd[3475]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:04.897657 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:41990.service: Deactivated successfully. Jul 7 06:02:04.899385 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:02:04.900967 systemd-logind[1414]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:02:04.902227 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:41996.service - OpenSSH per-connection server daemon (10.0.0.1:41996). Jul 7 06:02:04.903558 systemd-logind[1414]: Removed session 8. Jul 7 06:02:04.941066 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:04.942217 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:04.945583 systemd-logind[1414]: New session 9 of user core. Jul 7 06:02:04.957264 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:02:05.092017 sshd[3493]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:05.102669 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:41996.service: Deactivated successfully. Jul 7 06:02:05.104240 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:02:05.106218 systemd-logind[1414]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:02:05.114475 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Jul 7 06:02:05.115729 systemd-logind[1414]: Removed session 9. Jul 7 06:02:05.149267 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:05.150489 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:05.154064 systemd-logind[1414]: New session 10 of user core. Jul 7 06:02:05.169303 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:02:05.275616 sshd[3506]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:05.279041 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:42012.service: Deactivated successfully. Jul 7 06:02:05.280811 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:02:05.281404 systemd-logind[1414]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:02:05.282356 systemd-logind[1414]: Removed session 10. Jul 7 06:02:10.285645 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:42024.service - OpenSSH per-connection server daemon (10.0.0.1:42024). Jul 7 06:02:10.321933 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:10.323286 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:10.328073 systemd-logind[1414]: New session 11 of user core. Jul 7 06:02:10.336330 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 06:02:10.448686 sshd[3541]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:10.462782 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:42024.service: Deactivated successfully. Jul 7 06:02:10.465337 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:02:10.466785 systemd-logind[1414]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:02:10.473490 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:42040.service - OpenSSH per-connection server daemon (10.0.0.1:42040). Jul 7 06:02:10.474716 systemd-logind[1414]: Removed session 11. Jul 7 06:02:10.506602 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:10.507809 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:10.511272 systemd-logind[1414]: New session 12 of user core. Jul 7 06:02:10.530413 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:02:10.716031 sshd[3556]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:10.727661 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:42040.service: Deactivated successfully. Jul 7 06:02:10.729107 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:02:10.730391 systemd-logind[1414]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:02:10.737523 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:42044.service - OpenSSH per-connection server daemon (10.0.0.1:42044). Jul 7 06:02:10.738262 systemd-logind[1414]: Removed session 12. Jul 7 06:02:10.770999 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 42044 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:10.772268 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:10.775871 systemd-logind[1414]: New session 13 of user core. Jul 7 06:02:10.781357 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:02:11.493300 sshd[3575]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:11.502264 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:42044.service: Deactivated successfully. Jul 7 06:02:11.505002 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:02:11.507219 systemd-logind[1414]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:02:11.516757 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058). Jul 7 06:02:11.518019 systemd-logind[1414]: Removed session 13. Jul 7 06:02:11.552770 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:11.554047 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:11.558113 systemd-logind[1414]: New session 14 of user core. Jul 7 06:02:11.567306 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:02:11.767134 sshd[3609]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:11.780848 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:42058.service: Deactivated successfully. Jul 7 06:02:11.782557 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:02:11.783202 systemd-logind[1414]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:02:11.797456 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:42074.service - OpenSSH per-connection server daemon (10.0.0.1:42074). Jul 7 06:02:11.798558 systemd-logind[1414]: Removed session 14. 
Jul 7 06:02:11.831193 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 42074 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:11.832614 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:11.836345 systemd-logind[1414]: New session 15 of user core. Jul 7 06:02:11.846307 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:02:11.952618 sshd[3621]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:11.955774 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:42074.service: Deactivated successfully. Jul 7 06:02:11.957994 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:02:11.959054 systemd-logind[1414]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:02:11.960280 systemd-logind[1414]: Removed session 15. Jul 7 06:02:16.963024 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:51346.service - OpenSSH per-connection server daemon (10.0.0.1:51346). Jul 7 06:02:17.000097 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 51346 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:17.001377 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:17.007152 systemd-logind[1414]: New session 16 of user core. Jul 7 06:02:17.013320 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:02:17.116119 sshd[3658]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:17.119950 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:51346.service: Deactivated successfully. Jul 7 06:02:17.122395 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:02:17.124274 systemd-logind[1414]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:02:17.125638 systemd-logind[1414]: Removed session 16. Jul 7 06:02:22.128814 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:51356.service - OpenSSH per-connection server daemon (10.0.0.1:51356). Jul 7 06:02:22.165621 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 51356 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:22.166725 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:22.170068 systemd-logind[1414]: New session 17 of user core. Jul 7 06:02:22.179287 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:02:22.281039 sshd[3694]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:22.284253 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:51356.service: Deactivated successfully. Jul 7 06:02:22.285908 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:02:22.286605 systemd-logind[1414]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:02:22.287314 systemd-logind[1414]: Removed session 17. Jul 7 06:02:27.293125 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:43372.service - OpenSSH per-connection server daemon (10.0.0.1:43372). Jul 7 06:02:27.330221 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 43372 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:02:27.331485 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:27.335731 systemd-logind[1414]: New session 18 of user core. Jul 7 06:02:27.345297 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 06:02:27.448998 sshd[3729]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:27.452330 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:43372.service: Deactivated successfully. Jul 7 06:02:27.455464 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:02:27.456481 systemd-logind[1414]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:02:27.457515 systemd-logind[1414]: Removed session 18.