Sep 9 00:20:33.853500 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:20:33.853522 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Sep 8 22:48:00 -00 2025
Sep 9 00:20:33.853532 kernel: KASLR enabled
Sep 9 00:20:33.853539 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:20:33.853545 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 9 00:20:33.853550 kernel: random: crng init done
Sep 9 00:20:33.853558 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:20:33.853564 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 9 00:20:33.853570 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:20:33.853578 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853584 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853590 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853596 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853603 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853617 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853627 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853634 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853640 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:20:33.853647 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:20:33.853653 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:20:33.853660 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:20:33.853667 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 9 00:20:33.853673 kernel: Zone ranges:
Sep 9 00:20:33.853679 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:20:33.853686 kernel: DMA32 empty
Sep 9 00:20:33.853693 kernel: Normal empty
Sep 9 00:20:33.853700 kernel: Movable zone start for each node
Sep 9 00:20:33.853706 kernel: Early memory node ranges
Sep 9 00:20:33.853713 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 9 00:20:33.853719 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 00:20:33.853726 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 00:20:33.853733 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 00:20:33.853739 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 00:20:33.853746 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 00:20:33.853752 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 00:20:33.853759 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:20:33.853765 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:20:33.853773 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:20:33.853779 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:20:33.853786 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:20:33.853796 kernel: psci: Trusted OS migration not required
Sep 9 00:20:33.853803 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:20:33.853810 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:20:33.853818 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 00:20:33.853825 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 00:20:33.853832 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:20:33.853839 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:20:33.853845 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:20:33.853852 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:20:33.853859 kernel: CPU features: detected: Spectre-v4
Sep 9 00:20:33.853866 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:20:33.853873 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:20:33.853880 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:20:33.853888 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:20:33.853895 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:20:33.853902 kernel: alternatives: applying boot alternatives
Sep 9 00:20:33.853911 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:20:33.853918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:20:33.853925 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:20:33.853932 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:20:33.853941 kernel: Fallback order for Node 0: 0
Sep 9 00:20:33.853952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:20:33.853961 kernel: Policy zone: DMA
Sep 9 00:20:33.853968 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:20:33.853976 kernel: software IO TLB: area num 4.
Sep 9 00:20:33.853984 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 00:20:33.853992 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 9 00:20:33.853999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:20:33.854006 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:20:33.854013 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:20:33.854020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:20:33.854027 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:20:33.854034 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:20:33.854041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:20:33.854049 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:20:33.854057 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:20:33.854064 kernel: GICv3: 256 SPIs implemented
Sep 9 00:20:33.854071 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:20:33.854078 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:20:33.854085 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 00:20:33.854091 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:20:33.854098 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:20:33.854106 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:20:33.854113 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:20:33.854120 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 00:20:33.854136 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 00:20:33.854144 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:20:33.854152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:20:33.854159 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:20:33.854166 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:20:33.854173 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:20:33.854180 kernel: arm-pv: using stolen time PV
Sep 9 00:20:33.854188 kernel: Console: colour dummy device 80x25
Sep 9 00:20:33.854195 kernel: ACPI: Core revision 20230628
Sep 9 00:20:33.854203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:20:33.854210 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:20:33.854220 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:20:33.854229 kernel: landlock: Up and running.
Sep 9 00:20:33.854236 kernel: SELinux: Initializing.
Sep 9 00:20:33.854243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:20:33.854250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:20:33.854257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:20:33.854264 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:20:33.854271 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:20:33.854279 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:20:33.854285 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:20:33.854294 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:20:33.854301 kernel: Remapping and enabling EFI services.
Sep 9 00:20:33.854308 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:20:33.854315 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:20:33.854322 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:20:33.854329 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 00:20:33.854336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:20:33.854343 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:20:33.854351 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:20:33.854358 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:20:33.854366 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 00:20:33.854374 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:20:33.854385 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:20:33.854395 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:20:33.854402 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:20:33.854410 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 00:20:33.854417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:20:33.854425 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:20:33.854432 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:20:33.854441 kernel: SMP: Total of 4 processors activated.
Sep 9 00:20:33.854448 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:20:33.854456 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:20:33.854463 kernel: CPU features: detected: Common not Private translations
Sep 9 00:20:33.854471 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:20:33.854478 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 00:20:33.854486 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:20:33.854493 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:20:33.854501 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:20:33.854509 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:20:33.854517 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:20:33.854524 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:20:33.854531 kernel: alternatives: applying system-wide alternatives
Sep 9 00:20:33.854539 kernel: devtmpfs: initialized
Sep 9 00:20:33.854546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:20:33.854554 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:20:33.854561 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:20:33.854570 kernel: SMBIOS 3.0.0 present.
Sep 9 00:20:33.854578 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 9 00:20:33.854585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:20:33.854593 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:20:33.854600 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:20:33.854608 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:20:33.854620 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:20:33.854628 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Sep 9 00:20:33.854635 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:20:33.854644 kernel: cpuidle: using governor menu
Sep 9 00:20:33.854652 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:20:33.854659 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:20:33.854667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:20:33.854674 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:20:33.854682 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 00:20:33.854689 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 00:20:33.854697 kernel: Modules: 509008 pages in range for PLT usage
Sep 9 00:20:33.854705 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:20:33.854713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:20:33.854721 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:20:33.854729 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 00:20:33.854736 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:20:33.854744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:20:33.854754 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:20:33.854762 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 00:20:33.854769 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:20:33.854777 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:20:33.854785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:20:33.854793 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:20:33.854800 kernel: ACPI: Interpreter enabled
Sep 9 00:20:33.854808 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:20:33.854815 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:20:33.854823 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:20:33.854830 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:20:33.854838 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:20:33.854984 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:20:33.855100 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:20:33.855183 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:20:33.855251 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:20:33.855318 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:20:33.855328 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:20:33.855336 kernel: PCI host bridge to bus 0000:00
Sep 9 00:20:33.855406 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:20:33.855472 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:20:33.855534 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:20:33.855596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:20:33.855696 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:20:33.855776 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:20:33.855846 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:20:33.855917 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:20:33.855986 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:20:33.856054 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:20:33.856121 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:20:33.856200 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:20:33.856262 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:20:33.856321 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:20:33.856385 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:20:33.856395 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:20:33.856403 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:20:33.856410 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:20:33.856418 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:20:33.856425 kernel: iommu: Default domain type: Translated
Sep 9 00:20:33.856433 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:20:33.856440 kernel: efivars: Registered efivars operations
Sep 9 00:20:33.856448 kernel: vgaarb: loaded
Sep 9 00:20:33.856458 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:20:33.856465 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:20:33.856473 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:20:33.856480 kernel: pnp: PnP ACPI init
Sep 9 00:20:33.856557 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:20:33.856568 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:20:33.856575 kernel: NET: Registered PF_INET protocol family
Sep 9 00:20:33.856583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:20:33.856592 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:20:33.856600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:20:33.856608 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:20:33.856621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:20:33.856629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:20:33.856637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:20:33.856644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:20:33.856652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:20:33.856659 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:20:33.856668 kernel: kvm [1]: HYP mode not available
Sep 9 00:20:33.856675 kernel: Initialise system trusted keyrings
Sep 9 00:20:33.856683 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:20:33.856690 kernel: Key type asymmetric registered
Sep 9 00:20:33.856698 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:20:33.856705 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:20:33.856713 kernel: io scheduler mq-deadline registered
Sep 9 00:20:33.856720 kernel: io scheduler kyber registered
Sep 9 00:20:33.856728 kernel: io scheduler bfq registered
Sep 9 00:20:33.856737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:20:33.856744 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:20:33.856752 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:20:33.856823 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:20:33.856833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:20:33.856841 kernel: thunder_xcv, ver 1.0
Sep 9 00:20:33.856848 kernel: thunder_bgx, ver 1.0
Sep 9 00:20:33.856856 kernel: nicpf, ver 1.0
Sep 9 00:20:33.856863 kernel: nicvf, ver 1.0
Sep 9 00:20:33.856939 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:20:33.857003 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:20:33 UTC (1757377233)
Sep 9 00:20:33.857014 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:20:33.857021 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:20:33.857029 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 00:20:33.857037 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 00:20:33.857044 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:20:33.857051 kernel: Segment Routing with IPv6
Sep 9 00:20:33.857061 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:20:33.857068 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:20:33.857076 kernel: Key type dns_resolver registered
Sep 9 00:20:33.857083 kernel: registered taskstats version 1
Sep 9 00:20:33.857090 kernel: Loading compiled-in X.509 certificates
Sep 9 00:20:33.857098 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: f5b097e6797722e0cc665195a3c415b6be267631'
Sep 9 00:20:33.857105 kernel: Key type .fscrypt registered
Sep 9 00:20:33.857113 kernel: Key type fscrypt-provisioning registered
Sep 9 00:20:33.857120 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:20:33.857143 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:20:33.857151 kernel: ima: No architecture policies found
Sep 9 00:20:33.857158 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:20:33.857165 kernel: clk: Disabling unused clocks
Sep 9 00:20:33.857173 kernel: Freeing unused kernel memory: 39424K
Sep 9 00:20:33.857180 kernel: Run /init as init process
Sep 9 00:20:33.857188 kernel: with arguments:
Sep 9 00:20:33.857195 kernel: /init
Sep 9 00:20:33.857202 kernel: with environment:
Sep 9 00:20:33.857211 kernel: HOME=/
Sep 9 00:20:33.857219 kernel: TERM=linux
Sep 9 00:20:33.857226 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:20:33.857235 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:20:33.857245 systemd[1]: Detected virtualization kvm.
Sep 9 00:20:33.857253 systemd[1]: Detected architecture arm64.
Sep 9 00:20:33.857261 systemd[1]: Running in initrd.
Sep 9 00:20:33.857269 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:20:33.857278 systemd[1]: Hostname set to .
Sep 9 00:20:33.857286 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:20:33.857294 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:20:33.857302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:20:33.857310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:20:33.857319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:20:33.857327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:20:33.857335 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:20:33.857345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:20:33.857354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:20:33.857362 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:20:33.857371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:20:33.857379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:20:33.857387 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:20:33.857401 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:20:33.857410 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:20:33.857418 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:20:33.857426 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:20:33.857434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:20:33.857442 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:20:33.857451 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:20:33.857459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:20:33.857467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:20:33.857476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:20:33.857484 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:20:33.857492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:20:33.857500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:20:33.857508 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:20:33.857516 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:20:33.857524 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:20:33.857532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:20:33.857540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:20:33.857550 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:20:33.857558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:20:33.857566 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:20:33.857591 systemd-journald[237]: Collecting audit messages is disabled.
Sep 9 00:20:33.857617 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:20:33.857626 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:20:33.857634 kernel: Bridge firewalling registered
Sep 9 00:20:33.857642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:20:33.857652 systemd-journald[237]: Journal started
Sep 9 00:20:33.857672 systemd-journald[237]: Runtime Journal (/run/log/journal/bcfba423f53f466cbd7d8492db04a499) is 5.9M, max 47.3M, 41.4M free.
Sep 9 00:20:33.857708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:20:33.840975 systemd-modules-load[238]: Inserted module 'overlay'
Sep 9 00:20:33.855599 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 9 00:20:33.861594 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:20:33.862847 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:20:33.880272 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:20:33.882001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:20:33.884103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:20:33.887057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:20:33.893604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:20:33.896759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:20:33.901046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:20:33.902524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:20:33.918346 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:20:33.920705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:20:33.929109 dracut-cmdline[276]: dracut-dracut-053
Sep 9 00:20:33.931532 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:20:33.945740 systemd-resolved[278]: Positive Trust Anchors:
Sep 9 00:20:33.945760 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:20:33.945792 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:20:33.950585 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 9 00:20:33.951624 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:20:33.956100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:20:33.997161 kernel: SCSI subsystem initialized
Sep 9 00:20:34.002142 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:20:34.009147 kernel: iscsi: registered transport (tcp)
Sep 9 00:20:34.022155 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:20:34.022180 kernel: QLogic iSCSI HBA Driver
Sep 9 00:20:34.062156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:20:34.074308 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:20:34.089966 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:20:34.090012 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:20:34.090022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:20:34.135175 kernel: raid6: neonx8 gen() 15739 MB/s
Sep 9 00:20:34.152161 kernel: raid6: neonx4 gen() 15626 MB/s
Sep 9 00:20:34.169154 kernel: raid6: neonx2 gen() 13262 MB/s
Sep 9 00:20:34.186153 kernel: raid6: neonx1 gen() 10486 MB/s
Sep 9 00:20:34.203147 kernel: raid6: int64x8 gen() 6937 MB/s
Sep 9 00:20:34.220153 kernel: raid6: int64x4 gen() 7341 MB/s
Sep 9 00:20:34.237145 kernel: raid6: int64x2 gen() 6111 MB/s
Sep 9 00:20:34.254153 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 9 00:20:34.254176 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
Sep 9 00:20:34.271160 kernel: raid6: .... xor() 12054 MB/s, rmw enabled
Sep 9 00:20:34.271188 kernel: raid6: using neon recovery algorithm
Sep 9 00:20:34.276211 kernel: xor: measuring software checksum speed
Sep 9 00:20:34.276226 kernel: 8regs : 19826 MB/sec
Sep 9 00:20:34.277298 kernel: 32regs : 19674 MB/sec
Sep 9 00:20:34.277314 kernel: arm64_neon : 26981 MB/sec
Sep 9 00:20:34.277323 kernel: xor: using function: arm64_neon (26981 MB/sec)
Sep 9 00:20:34.326152 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:20:34.336660 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:20:34.347266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:20:34.359324 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 9 00:20:34.362470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:20:34.368241 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:20:34.380424 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Sep 9 00:20:34.407672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:20:34.416275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:20:34.453979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:20:34.462282 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:20:34.474450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:20:34.475772 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:20:34.478857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:20:34.481769 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:20:34.491304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:20:34.501441 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:20:34.512445 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 00:20:34.512602 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:20:34.518171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:20:34.518210 kernel: GPT:9289727 != 19775487
Sep 9 00:20:34.518220 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:20:34.520330 kernel: GPT:9289727 != 19775487
Sep 9 00:20:34.520365 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:20:34.520375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:20:34.521656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:20:34.521708 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:20:34.525872 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:20:34.527407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:20:34.527460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:20:34.537942 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (511)
Sep 9 00:20:34.537967 kernel: BTRFS: device fsid 7c1eef97-905d-47ac-bb4a-010204f95541 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (507)
Sep 9 00:20:34.531887 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:20:34.542320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:20:34.551646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:20:34.554150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:20:34.564355 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:20:34.568949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:20:34.572839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:20:34.574173 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:20:34.589314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:20:34.591086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:20:34.596820 disk-uuid[554]: Primary Header is updated.
Sep 9 00:20:34.596820 disk-uuid[554]: Secondary Entries is updated.
Sep 9 00:20:34.596820 disk-uuid[554]: Secondary Header is updated.
Sep 9 00:20:34.599879 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:20:34.605157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:20:34.608153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:20:34.611155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:20:35.609304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:20:35.609353 disk-uuid[556]: The operation has completed successfully.
Sep 9 00:20:35.632359 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:20:35.632452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:20:35.651269 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:20:35.654964 sh[579]: Success
Sep 9 00:20:35.665142 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 00:20:35.691328 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:20:35.702400 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:20:35.703751 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:20:35.716785 kernel: BTRFS info (device dm-0): first mount of filesystem 7c1eef97-905d-47ac-bb4a-010204f95541
Sep 9 00:20:35.716823 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:20:35.716834 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:20:35.718269 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:20:35.718285 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:20:35.722310 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:20:35.723689 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:20:35.732265 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:20:35.734332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:20:35.741152 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:20:35.741190 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:20:35.741200 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:20:35.744169 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:20:35.750337 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:20:35.751910 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:20:35.758529 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:20:35.763299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:20:35.818339 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:20:35.827333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:20:35.828937 ignition[671]: Ignition 2.19.0
Sep 9 00:20:35.828944 ignition[671]: Stage: fetch-offline
Sep 9 00:20:35.828979 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:35.828987 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:35.829148 ignition[671]: parsed url from cmdline: ""
Sep 9 00:20:35.829151 ignition[671]: no config URL provided
Sep 9 00:20:35.829156 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:20:35.829163 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:20:35.829184 ignition[671]: op(1): [started] loading QEMU firmware config module
Sep 9 00:20:35.829189 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:20:35.838908 ignition[671]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:20:35.838931 ignition[671]: QEMU firmware config was not found. Ignoring...
Sep 9 00:20:35.850072 systemd-networkd[767]: lo: Link UP
Sep 9 00:20:35.850083 systemd-networkd[767]: lo: Gained carrier
Sep 9 00:20:35.851040 systemd-networkd[767]: Enumeration completed
Sep 9 00:20:35.851596 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:20:35.851599 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:20:35.852886 systemd-networkd[767]: eth0: Link UP
Sep 9 00:20:35.852889 systemd-networkd[767]: eth0: Gained carrier
Sep 9 00:20:35.852896 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:20:35.852949 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:20:35.854339 systemd[1]: Reached target network.target - Network.
Sep 9 00:20:35.871500 ignition[671]: parsing config with SHA512: ecbcee9e6a06510ea59a12fa06a058b3eeb57d9fdaa1c36c31133430d11acd60ebedbca163f71bc16ca17a1174b141a15b9d6b73ca39c86573ebf849a9cf585f
Sep 9 00:20:35.875173 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:20:35.875380 unknown[671]: fetched base config from "system"
Sep 9 00:20:35.875770 ignition[671]: fetch-offline: fetch-offline passed
Sep 9 00:20:35.875387 unknown[671]: fetched user config from "qemu"
Sep 9 00:20:35.875826 ignition[671]: Ignition finished successfully
Sep 9 00:20:35.878345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:20:35.880260 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:20:35.888291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:20:35.898256 ignition[773]: Ignition 2.19.0
Sep 9 00:20:35.898266 ignition[773]: Stage: kargs
Sep 9 00:20:35.898428 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:35.898437 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:35.899285 ignition[773]: kargs: kargs passed
Sep 9 00:20:35.902862 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:20:35.899329 ignition[773]: Ignition finished successfully
Sep 9 00:20:35.910292 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:20:35.920392 ignition[781]: Ignition 2.19.0
Sep 9 00:20:35.920401 ignition[781]: Stage: disks
Sep 9 00:20:35.920573 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:35.923147 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:20:35.920583 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:35.924641 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:20:35.921417 ignition[781]: disks: disks passed
Sep 9 00:20:35.926193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:20:35.921461 ignition[781]: Ignition finished successfully
Sep 9 00:20:35.928207 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:20:35.929948 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:20:35.931303 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:20:35.943273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:20:35.952416 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:20:35.955783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:20:35.959225 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:20:36.006167 kernel: EXT4-fs (vda9): mounted filesystem d987a4c8-1278-4a59-9d40-0c91e08e9423 r/w with ordered data mode. Quota mode: none.
Sep 9 00:20:36.007027 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:20:36.008338 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:20:36.020218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:20:36.021862 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:20:36.023037 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:20:36.023108 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:20:36.030462 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (800)
Sep 9 00:20:36.023143 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:20:36.027312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:20:36.029164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:20:36.035146 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:20:36.035173 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:20:36.036338 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:20:36.038144 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:20:36.039750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:20:36.066560 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:20:36.069617 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:20:36.073618 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:20:36.077088 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:20:36.137784 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:20:36.147310 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:20:36.149284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:20:36.154141 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:20:36.167032 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:20:36.170464 ignition[915]: INFO : Ignition 2.19.0
Sep 9 00:20:36.170464 ignition[915]: INFO : Stage: mount
Sep 9 00:20:36.171906 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:36.171906 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:36.171906 ignition[915]: INFO : mount: mount passed
Sep 9 00:20:36.171906 ignition[915]: INFO : Ignition finished successfully
Sep 9 00:20:36.173013 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:20:36.185203 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:20:36.716142 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:20:36.730322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:20:36.736643 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (927)
Sep 9 00:20:36.736675 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:20:36.736694 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:20:36.737452 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:20:36.741158 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:20:36.741835 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:20:36.758259 ignition[944]: INFO : Ignition 2.19.0
Sep 9 00:20:36.758259 ignition[944]: INFO : Stage: files
Sep 9 00:20:36.759808 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:36.759808 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:36.759808 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:20:36.763164 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:20:36.763164 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:20:36.763164 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:20:36.763164 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:20:36.763164 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:20:36.762567 unknown[944]: wrote ssh authorized keys file for user: core
Sep 9 00:20:36.770116 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 00:20:36.770116 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 00:20:36.813920 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:20:37.257002 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:20:37.259014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:20:37.272815 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 00:20:37.553037 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 00:20:37.697374 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 9 00:20:38.525762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:20:38.525762 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 00:20:38.528791 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:20:38.553681 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:20:38.563601 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:20:38.565840 ignition[944]: INFO : files: files passed
Sep 9 00:20:38.565840 ignition[944]: INFO : Ignition finished successfully
Sep 9 00:20:38.566981 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:20:38.577655 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:20:38.581503 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:20:38.585985 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:20:38.587173 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:20:38.603599 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:20:38.605953 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:20:38.605953 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:20:38.609071 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:20:38.610059 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:20:38.611973 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:20:38.624353 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:20:38.645932 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:20:38.646033 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:20:38.647325 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:20:38.649244 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:20:38.651575 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:20:38.652401 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:20:38.671847 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:20:38.686396 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:20:38.698507 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:20:38.700046 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:20:38.702295 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:20:38.704202 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:20:38.704343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:20:38.707158 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:20:38.709360 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:20:38.711088 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:20:38.713485 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:20:38.719851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:20:38.721753 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:20:38.723740 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:20:38.726125 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:20:38.728285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:20:38.730049 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:20:38.731820 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:20:38.731951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:20:38.734727 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:20:38.736632 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:20:38.738537 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:20:38.738645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:20:38.740804 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:20:38.740924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:20:38.743721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:20:38.743843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:20:38.745728 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:20:38.747283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:20:38.751229 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:20:38.752761 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:20:38.754936 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:20:38.756527 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:20:38.756722 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:20:38.758179 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:20:38.758262 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:20:38.761731 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:20:38.761859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:20:38.763855 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:20:38.763961 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:20:38.771341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:20:38.772292 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:20:38.772434 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:20:38.775106 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:20:38.776858 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:20:38.777081 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:20:38.778962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:20:38.779064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:20:38.784093 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:20:38.786002 ignition[998]: INFO : Ignition 2.19.0
Sep 9 00:20:38.786002 ignition[998]: INFO : Stage: umount
Sep 9 00:20:38.786002 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:20:38.786002 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:20:38.791583 ignition[998]: INFO : umount: umount passed
Sep 9 00:20:38.791583 ignition[998]: INFO : Ignition finished successfully
Sep 9 00:20:38.786320 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:20:38.788478 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:20:38.788550 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:20:38.791540 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:20:38.791962 systemd[1]: Stopped target network.target - Network.
Sep 9 00:20:38.793215 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:20:38.793282 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:20:38.794844 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:20:38.794893 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:20:38.796836 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:20:38.796879 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:20:38.798418 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:20:38.798462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:20:38.800474 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:20:38.805694 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:20:38.814212 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 9 00:20:38.815708 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:20:38.815850 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:20:38.818312 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:20:38.818916 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:20:38.821850 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:20:38.821907 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:20:38.835314 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:20:38.836310 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:20:38.836378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:20:38.838346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:20:38.838392 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:20:38.840046 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:20:38.840089 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:20:38.842496 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:20:38.842545 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:20:38.844713 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:20:38.855440 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:20:38.855567 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:20:38.858673 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:20:38.858760 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:20:38.860465 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:20:38.860565 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:20:38.868455 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:20:38.868671 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:20:38.871394 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:20:38.871470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:20:38.874767 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:20:38.874807 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:20:38.876666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:20:38.876720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:20:38.879870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:20:38.879922 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:20:38.882833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:20:38.882881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:20:38.900893 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:20:38.902160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:20:38.902237 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:20:38.907648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:20:38.907722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:38.909924 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 9 00:20:38.910012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:20:38.912823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:20:38.915199 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:20:38.926180 systemd[1]: Switching root. Sep 9 00:20:38.956451 systemd-journald[237]: Journal stopped Sep 9 00:20:39.659360 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 9 00:20:39.659413 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:20:39.659430 kernel: SELinux: policy capability open_perms=1 Sep 9 00:20:39.659440 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:20:39.659449 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:20:39.659463 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:20:39.659472 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:20:39.659481 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:20:39.659491 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:20:39.659501 kernel: audit: type=1403 audit(1757377239.106:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:20:39.659512 systemd[1]: Successfully loaded SELinux policy in 35.002ms. Sep 9 00:20:39.659529 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.408ms. Sep 9 00:20:39.659541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:20:39.659552 systemd[1]: Detected virtualization kvm. Sep 9 00:20:39.659562 systemd[1]: Detected architecture arm64. Sep 9 00:20:39.659576 systemd[1]: Detected first boot. 
Sep 9 00:20:39.659600 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:20:39.659611 zram_generator::config[1042]: No configuration found. Sep 9 00:20:39.659622 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:20:39.659634 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:20:39.659644 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:20:39.659655 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:20:39.659665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:20:39.659677 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:20:39.659687 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:20:39.659697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:20:39.659707 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:20:39.659717 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:20:39.659729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:20:39.659754 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:20:39.659765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:20:39.659776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:20:39.659788 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:20:39.659798 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:20:39.659808 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Sep 9 00:20:39.659819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:20:39.659830 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 00:20:39.659842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:20:39.659853 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:20:39.659863 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:20:39.659874 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:20:39.659884 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:20:39.659895 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:20:39.659905 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:20:39.659917 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:20:39.659928 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:20:39.659938 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:20:39.659949 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:20:39.659959 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:20:39.659970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:20:39.659980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:20:39.659991 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:20:39.660001 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:20:39.660012 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:20:39.660024 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:20:39.660048 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 9 00:20:39.660059 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:20:39.660070 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:20:39.660080 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:20:39.660091 systemd[1]: Reached target machines.target - Containers. Sep 9 00:20:39.660112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:20:39.660123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:20:39.660143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:20:39.660154 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:20:39.660166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:20:39.660177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:20:39.660187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:20:39.660198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:20:39.660208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:20:39.660219 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:20:39.660232 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:20:39.660242 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:20:39.660252 kernel: fuse: init (API version 7.39) Sep 9 00:20:39.660262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:20:39.660273 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 00:20:39.660284 kernel: loop: module loaded Sep 9 00:20:39.660293 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:20:39.660303 kernel: ACPI: bus type drm_connector registered Sep 9 00:20:39.660313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:20:39.660323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:20:39.660335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:20:39.660346 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:20:39.660373 systemd-journald[1109]: Collecting audit messages is disabled. Sep 9 00:20:39.660396 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:20:39.660407 systemd[1]: Stopped verity-setup.service. Sep 9 00:20:39.660417 systemd-journald[1109]: Journal started Sep 9 00:20:39.660440 systemd-journald[1109]: Runtime Journal (/run/log/journal/bcfba423f53f466cbd7d8492db04a499) is 5.9M, max 47.3M, 41.4M free. Sep 9 00:20:39.475323 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:20:39.491208 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:20:39.491580 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:20:39.664634 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:20:39.665333 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:20:39.666619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:20:39.668049 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:20:39.669273 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:20:39.670599 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:20:39.671895 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 9 00:20:39.674187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:20:39.675654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:20:39.677258 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:20:39.677394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:20:39.678855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:20:39.678999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:20:39.680480 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:20:39.680642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:20:39.682096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:20:39.682286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:20:39.683764 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:20:39.683906 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:20:39.685301 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:20:39.685444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:20:39.686772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:20:39.689206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:20:39.690785 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:20:39.703374 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:20:39.709244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:20:39.714291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 9 00:20:39.715395 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:20:39.715439 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:20:39.717474 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:20:39.719903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:20:39.722333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:20:39.723438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:20:39.724963 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:20:39.727003 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:20:39.728367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:20:39.732325 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:20:39.733528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:20:39.734979 systemd-journald[1109]: Time spent on flushing to /var/log/journal/bcfba423f53f466cbd7d8492db04a499 is 21.552ms for 853 entries. Sep 9 00:20:39.734979 systemd-journald[1109]: System Journal (/var/log/journal/bcfba423f53f466cbd7d8492db04a499) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:20:39.773627 systemd-journald[1109]: Received client request to flush runtime journal. Sep 9 00:20:39.773679 kernel: loop0: detected capacity change from 0 to 114432 Sep 9 00:20:39.737360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 00:20:39.745406 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:20:39.750349 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:20:39.753057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:20:39.754708 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:20:39.756189 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:20:39.757854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:20:39.759812 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:20:39.766025 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:20:39.775660 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:20:39.778872 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:20:39.779154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:20:39.780580 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:20:39.789868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:20:39.799706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:20:39.802182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:20:39.804068 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:20:39.806231 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Sep 9 00:20:39.813158 kernel: loop1: detected capacity change from 0 to 114328 Sep 9 00:20:39.814677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:20:39.833158 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Sep 9 00:20:39.833489 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Sep 9 00:20:39.838179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:20:39.843155 kernel: loop2: detected capacity change from 0 to 207008 Sep 9 00:20:39.877169 kernel: loop3: detected capacity change from 0 to 114432 Sep 9 00:20:39.890183 kernel: loop4: detected capacity change from 0 to 114328 Sep 9 00:20:39.895162 kernel: loop5: detected capacity change from 0 to 207008 Sep 9 00:20:39.899404 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:20:39.899844 (sd-merge)[1179]: Merged extensions into '/usr'. Sep 9 00:20:39.903006 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:20:39.903020 systemd[1]: Reloading... Sep 9 00:20:39.955225 zram_generator::config[1201]: No configuration found. Sep 9 00:20:40.023372 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:20:40.062162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:20:40.103525 systemd[1]: Reloading finished in 200 ms. Sep 9 00:20:40.129948 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:20:40.132545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:20:40.145300 systemd[1]: Starting ensure-sysext.service... 
Sep 9 00:20:40.147179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:20:40.152897 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:20:40.152918 systemd[1]: Reloading... Sep 9 00:20:40.163123 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:20:40.163401 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:20:40.164022 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:20:40.164263 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 9 00:20:40.164318 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 9 00:20:40.166545 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:20:40.166557 systemd-tmpfiles[1241]: Skipping /boot Sep 9 00:20:40.173086 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:20:40.173102 systemd-tmpfiles[1241]: Skipping /boot Sep 9 00:20:40.197249 zram_generator::config[1267]: No configuration found. Sep 9 00:20:40.284592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:20:40.324193 systemd[1]: Reloading finished in 170 ms. Sep 9 00:20:40.339169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:20:40.351542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:20:40.359668 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:20:40.361979 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 9 00:20:40.364242 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:20:40.369414 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:20:40.377460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:20:40.379992 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:20:40.383932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:20:40.388457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:20:40.390698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:20:40.394212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:20:40.395392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:20:40.399095 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:20:40.401884 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:20:40.404441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:20:40.404615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:20:40.407300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:20:40.409208 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Sep 9 00:20:40.409231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:20:40.414636 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:20:40.414766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 9 00:20:40.419418 augenrules[1329]: No rules Sep 9 00:20:40.421016 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:20:40.426252 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:20:40.428493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:20:40.436337 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:20:40.438173 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:20:40.455417 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 00:20:40.456676 systemd[1]: Finished ensure-sysext.service. Sep 9 00:20:40.462299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:20:40.469427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:20:40.474290 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:20:40.475145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1359) Sep 9 00:20:40.478646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:20:40.482275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:20:40.483490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:20:40.489312 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:20:40.492288 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:20:40.496664 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 9 00:20:40.498215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:20:40.498740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:20:40.499628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:20:40.501100 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:20:40.501288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:20:40.503369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:20:40.503503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:20:40.505642 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:20:40.505758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:20:40.511685 systemd-resolved[1309]: Positive Trust Anchors: Sep 9 00:20:40.511702 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:20:40.511735 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:20:40.512024 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:20:40.518413 systemd-resolved[1309]: Defaulting to hostname 'linux'. 
Sep 9 00:20:40.521267 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:20:40.529106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:20:40.531339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:20:40.531403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:20:40.532489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:20:40.540300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:20:40.552612 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:20:40.557253 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:20:40.558618 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:20:40.576195 systemd-networkd[1371]: lo: Link UP Sep 9 00:20:40.576206 systemd-networkd[1371]: lo: Gained carrier Sep 9 00:20:40.576877 systemd-networkd[1371]: Enumeration completed Sep 9 00:20:40.576963 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:20:40.578194 systemd[1]: Reached target network.target - Network. Sep 9 00:20:40.580836 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:40.580848 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 9 00:20:40.581606 systemd-networkd[1371]: eth0: Link UP Sep 9 00:20:40.581613 systemd-networkd[1371]: eth0: Gained carrier Sep 9 00:20:40.581627 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:40.588344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:20:40.597908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:40.601213 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:20:40.601890 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Sep 9 00:20:40.602697 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:20:40.602842 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2025-09-09 00:20:40.439403 UTC. Sep 9 00:20:40.609166 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:20:40.611805 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:20:40.623360 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:20:40.636653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:40.645904 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:20:40.647482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:20:40.648623 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:20:40.649804 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:20:40.651098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 9 00:20:40.652514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:20:40.653711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:20:40.654998 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:20:40.656289 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:20:40.656322 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:20:40.657208 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:20:40.658753 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:20:40.661073 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:20:40.674161 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:20:40.676272 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:20:40.677763 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:20:40.678976 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:20:40.679898 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:20:40.680893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:20:40.680926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:20:40.681815 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:20:40.683727 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:20:40.683802 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:20:40.687363 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 9 00:20:40.692364 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:20:40.693046 jq[1406]: false Sep 9 00:20:40.693399 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:20:40.694464 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:20:40.697636 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:20:40.702312 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:20:40.704638 extend-filesystems[1407]: Found loop3 Sep 9 00:20:40.704638 extend-filesystems[1407]: Found loop4 Sep 9 00:20:40.704638 extend-filesystems[1407]: Found loop5 Sep 9 00:20:40.704638 extend-filesystems[1407]: Found vda Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda1 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda2 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda3 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found usr Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda4 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda6 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda7 Sep 9 00:20:40.709754 extend-filesystems[1407]: Found vda9 Sep 9 00:20:40.709754 extend-filesystems[1407]: Checking size of /dev/vda9 Sep 9 00:20:40.705102 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:20:40.728761 extend-filesystems[1407]: Resized partition /dev/vda9 Sep 9 00:20:40.712529 dbus-daemon[1405]: [system] SELinux support is enabled Sep 9 00:20:40.711693 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:20:40.714964 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 9 00:20:40.715371 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:20:40.719757 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:20:40.721942 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:20:40.726972 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:20:40.735188 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:20:40.736229 extend-filesystems[1428]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:20:40.746834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1358) Sep 9 00:20:40.746883 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:20:40.746897 update_engine[1422]: I20250909 00:20:40.746516 1422 main.cc:92] Flatcar Update Engine starting Sep 9 00:20:40.751758 update_engine[1422]: I20250909 00:20:40.749876 1422 update_check_scheduler.cc:74] Next update check in 11m35s Sep 9 00:20:40.748533 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:20:40.748706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:20:40.748952 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:20:40.749080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:20:40.751873 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:20:40.752209 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 00:20:40.755754 jq[1426]: true Sep 9 00:20:40.761656 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:20:40.771164 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:20:40.776215 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:20:40.782089 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:20:40.785942 jq[1437]: true Sep 9 00:20:40.782125 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:20:40.786318 extend-filesystems[1428]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:20:40.786318 extend-filesystems[1428]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:20:40.786318 extend-filesystems[1428]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:20:40.784372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:20:40.804436 tar[1430]: linux-arm64/LICENSE Sep 9 00:20:40.804436 tar[1430]: linux-arm64/helm Sep 9 00:20:40.806488 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Sep 9 00:20:40.784390 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:20:40.784552 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:20:40.785803 systemd-logind[1418]: New seat seat0. Sep 9 00:20:40.792340 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:20:40.793634 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 9 00:20:40.795658 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:20:40.795834 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:20:40.845996 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:20:40.847430 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:20:40.847834 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:20:40.852348 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:20:40.914094 containerd[1432]: time="2025-09-09T00:20:40.913989920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:20:40.942661 containerd[1432]: time="2025-09-09T00:20:40.942439600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.944179 containerd[1432]: time="2025-09-09T00:20:40.944122440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:40.944269 containerd[1432]: time="2025-09-09T00:20:40.944254720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944311080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944472760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944490560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944541960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944553520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944725080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944742360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944754800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944765160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.944836960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945380 containerd[1432]: time="2025-09-09T00:20:40.945014960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945706 containerd[1432]: time="2025-09-09T00:20:40.945102040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:40.945706 containerd[1432]: time="2025-09-09T00:20:40.945114720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:20:40.945706 containerd[1432]: time="2025-09-09T00:20:40.945207440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:20:40.945706 containerd[1432]: time="2025-09-09T00:20:40.945244480Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:20:40.949857 containerd[1432]: time="2025-09-09T00:20:40.949831440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:20:40.949977 containerd[1432]: time="2025-09-09T00:20:40.949964360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:20:40.950119 containerd[1432]: time="2025-09-09T00:20:40.950103640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:20:40.950209 containerd[1432]: time="2025-09-09T00:20:40.950193960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:20:40.950263 containerd[1432]: time="2025-09-09T00:20:40.950252440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:20:40.950462 containerd[1432]: time="2025-09-09T00:20:40.950439720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 9 00:20:40.950800 containerd[1432]: time="2025-09-09T00:20:40.950774640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:20:40.950993 containerd[1432]: time="2025-09-09T00:20:40.950970720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:20:40.951064 containerd[1432]: time="2025-09-09T00:20:40.951051880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:20:40.951114 containerd[1432]: time="2025-09-09T00:20:40.951102880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:20:40.951187 containerd[1432]: time="2025-09-09T00:20:40.951174120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951261 containerd[1432]: time="2025-09-09T00:20:40.951246560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951312 containerd[1432]: time="2025-09-09T00:20:40.951301640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951365 containerd[1432]: time="2025-09-09T00:20:40.951353000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951419 containerd[1432]: time="2025-09-09T00:20:40.951406720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951470 containerd[1432]: time="2025-09-09T00:20:40.951458280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 9 00:20:40.951520 containerd[1432]: time="2025-09-09T00:20:40.951508600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951576 containerd[1432]: time="2025-09-09T00:20:40.951564880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:20:40.951666 containerd[1432]: time="2025-09-09T00:20:40.951651360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951718 containerd[1432]: time="2025-09-09T00:20:40.951706800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951774 containerd[1432]: time="2025-09-09T00:20:40.951761720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951827 containerd[1432]: time="2025-09-09T00:20:40.951815880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951882 containerd[1432]: time="2025-09-09T00:20:40.951870040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951935 containerd[1432]: time="2025-09-09T00:20:40.951923480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.951985 containerd[1432]: time="2025-09-09T00:20:40.951973360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952047 containerd[1432]: time="2025-09-09T00:20:40.952034600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952102 containerd[1432]: time="2025-09-09T00:20:40.952089840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 9 00:20:40.952230 containerd[1432]: time="2025-09-09T00:20:40.952215120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952286 containerd[1432]: time="2025-09-09T00:20:40.952274680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952356 containerd[1432]: time="2025-09-09T00:20:40.952325640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952416 containerd[1432]: time="2025-09-09T00:20:40.952402760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952472 containerd[1432]: time="2025-09-09T00:20:40.952460080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:20:40.952551 containerd[1432]: time="2025-09-09T00:20:40.952537320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952626 containerd[1432]: time="2025-09-09T00:20:40.952612400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.952687 containerd[1432]: time="2025-09-09T00:20:40.952674360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:20:40.952855 containerd[1432]: time="2025-09-09T00:20:40.952828480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:20:40.953057 containerd[1432]: time="2025-09-09T00:20:40.953039960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:20:40.953114 containerd[1432]: time="2025-09-09T00:20:40.953102360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:20:40.953177 containerd[1432]: time="2025-09-09T00:20:40.953163440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:20:40.953224 containerd[1432]: time="2025-09-09T00:20:40.953213040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:20:40.953298 containerd[1432]: time="2025-09-09T00:20:40.953285360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:20:40.953347 containerd[1432]: time="2025-09-09T00:20:40.953335880Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:20:40.953394 containerd[1432]: time="2025-09-09T00:20:40.953382640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:20:40.953864 containerd[1432]: time="2025-09-09T00:20:40.953791560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:20:40.954031 containerd[1432]: time="2025-09-09T00:20:40.954014320Z" level=info msg="Connect containerd service" Sep 9 00:20:40.954125 containerd[1432]: time="2025-09-09T00:20:40.954110200Z" level=info msg="using legacy CRI server" Sep 9 00:20:40.954195 containerd[1432]: time="2025-09-09T00:20:40.954182360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:20:40.954323 containerd[1432]: time="2025-09-09T00:20:40.954308400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:20:40.955086 containerd[1432]: time="2025-09-09T00:20:40.955055360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:20:40.955480 containerd[1432]: time="2025-09-09T00:20:40.955362360Z" level=info msg="Start subscribing containerd event" Sep 9 00:20:40.955480 containerd[1432]: time="2025-09-09T00:20:40.955420480Z" level=info msg="Start recovering state" Sep 9 00:20:40.955537 containerd[1432]: time="2025-09-09T00:20:40.955483760Z" level=info msg="Start event monitor" Sep 9 00:20:40.955537 containerd[1432]: time="2025-09-09T00:20:40.955495920Z" level=info msg="Start snapshots 
syncer" Sep 9 00:20:40.955537 containerd[1432]: time="2025-09-09T00:20:40.955505120Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:20:40.955537 containerd[1432]: time="2025-09-09T00:20:40.955512720Z" level=info msg="Start streaming server" Sep 9 00:20:40.955987 containerd[1432]: time="2025-09-09T00:20:40.955892080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:20:40.955987 containerd[1432]: time="2025-09-09T00:20:40.955947840Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:20:40.959733 containerd[1432]: time="2025-09-09T00:20:40.959345040Z" level=info msg="containerd successfully booted in 0.046253s" Sep 9 00:20:40.959411 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:20:41.162000 tar[1430]: linux-arm64/README.md Sep 9 00:20:41.180159 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:20:41.246257 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:20:41.264680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:20:41.281391 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:20:41.287086 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:20:41.289157 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:20:41.291681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:20:41.303075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:20:41.306005 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:20:41.308151 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 00:20:41.309400 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 9 00:20:41.921480 systemd-networkd[1371]: eth0: Gained IPv6LL Sep 9 00:20:41.924177 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:20:41.925866 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:20:41.938574 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:20:41.942642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:20:41.945908 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:20:41.963122 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:20:41.963338 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:20:41.964984 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:20:41.967004 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:20:42.531707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:20:42.533329 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:20:42.535368 systemd[1]: Startup finished in 523ms (kernel) + 5.424s (initrd) + 3.464s (userspace) = 9.413s. 
Sep 9 00:20:42.536159 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:20:42.914618 kubelet[1519]: E0909 00:20:42.914464 1519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:20:42.918433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:20:42.918572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:20:46.145985 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:20:46.147336 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730). Sep 9 00:20:46.200297 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:20:46.205350 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.212899 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:20:46.227379 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:20:46.229264 systemd-logind[1418]: New session 1 of user core. Sep 9 00:20:46.236216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:20:46.238298 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:20:46.244806 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:20:46.314161 systemd[1536]: Queued start job for default target default.target. Sep 9 00:20:46.323165 systemd[1536]: Created slice app.slice - User Application Slice. 
Sep 9 00:20:46.323195 systemd[1536]: Reached target paths.target - Paths. Sep 9 00:20:46.323207 systemd[1536]: Reached target timers.target - Timers. Sep 9 00:20:46.324326 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:20:46.333680 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:20:46.333739 systemd[1536]: Reached target sockets.target - Sockets. Sep 9 00:20:46.333750 systemd[1536]: Reached target basic.target - Basic System. Sep 9 00:20:46.333784 systemd[1536]: Reached target default.target - Main User Target. Sep 9 00:20:46.333812 systemd[1536]: Startup finished in 84ms. Sep 9 00:20:46.334011 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:20:46.335159 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:20:46.398425 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:60740.service - OpenSSH per-connection server daemon (10.0.0.1:60740). Sep 9 00:20:46.432523 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 60740 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:20:46.433838 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.437932 systemd-logind[1418]: New session 2 of user core. Sep 9 00:20:46.449300 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:20:46.499561 sshd[1547]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:46.509397 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:60740.service: Deactivated successfully. Sep 9 00:20:46.510706 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:20:46.511843 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:20:46.512957 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:60748.service - OpenSSH per-connection server daemon (10.0.0.1:60748). Sep 9 00:20:46.513620 systemd-logind[1418]: Removed session 2. 
Sep 9 00:20:46.544216 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 60748 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:20:46.545522 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.548889 systemd-logind[1418]: New session 3 of user core. Sep 9 00:20:46.559254 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:20:46.606161 sshd[1554]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:46.615266 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:60748.service: Deactivated successfully. Sep 9 00:20:46.617495 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:20:46.618622 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:20:46.628456 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:60764.service - OpenSSH per-connection server daemon (10.0.0.1:60764). Sep 9 00:20:46.629380 systemd-logind[1418]: Removed session 3. Sep 9 00:20:46.657205 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 60764 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:20:46.658361 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.662360 systemd-logind[1418]: New session 4 of user core. Sep 9 00:20:46.677317 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:20:46.728708 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:46.737513 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:60764.service: Deactivated successfully. Sep 9 00:20:46.740514 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:20:46.741686 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:20:46.742815 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:60774.service - OpenSSH per-connection server daemon (10.0.0.1:60774). Sep 9 00:20:46.743514 systemd-logind[1418]: Removed session 4. 
Sep 9 00:20:46.775697 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 60774 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:20:46.776958 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.783049 systemd-logind[1418]: New session 5 of user core. Sep 9 00:20:46.798351 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:20:46.856000 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:20:46.856299 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:20:47.130429 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:20:47.130564 (dockerd)[1589]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:20:47.363492 dockerd[1589]: time="2025-09-09T00:20:47.363430179Z" level=info msg="Starting up" Sep 9 00:20:47.568487 dockerd[1589]: time="2025-09-09T00:20:47.568437157Z" level=info msg="Loading containers: start." Sep 9 00:20:47.849150 kernel: Initializing XFRM netlink socket Sep 9 00:20:47.923548 systemd-networkd[1371]: docker0: Link UP Sep 9 00:20:48.035410 dockerd[1589]: time="2025-09-09T00:20:48.035368017Z" level=info msg="Loading containers: done." 
Sep 9 00:20:48.085968 dockerd[1589]: time="2025-09-09T00:20:48.085894542Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:20:48.086145 dockerd[1589]: time="2025-09-09T00:20:48.086012588Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 9 00:20:48.086184 dockerd[1589]: time="2025-09-09T00:20:48.086163609Z" level=info msg="Daemon has completed initialization"
Sep 9 00:20:48.322038 dockerd[1589]: time="2025-09-09T00:20:48.321648830Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:20:48.321974 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:20:48.931337 containerd[1432]: time="2025-09-09T00:20:48.931286601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 00:20:49.558106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731737031.mount: Deactivated successfully.
Sep 9 00:20:50.873305 containerd[1432]: time="2025-09-09T00:20:50.873242590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:50.874246 containerd[1432]: time="2025-09-09T00:20:50.874213558Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 00:20:50.875084 containerd[1432]: time="2025-09-09T00:20:50.874992431Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:50.877892 containerd[1432]: time="2025-09-09T00:20:50.877829523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:50.879250 containerd[1432]: time="2025-09-09T00:20:50.879218670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.947520375s"
Sep 9 00:20:50.879372 containerd[1432]: time="2025-09-09T00:20:50.879354711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 00:20:50.880192 containerd[1432]: time="2025-09-09T00:20:50.880121061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 00:20:52.003600 containerd[1432]: time="2025-09-09T00:20:52.003542143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:52.004166 containerd[1432]: time="2025-09-09T00:20:52.004133871Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 00:20:52.005116 containerd[1432]: time="2025-09-09T00:20:52.005076742Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:52.009239 containerd[1432]: time="2025-09-09T00:20:52.009207732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:52.010330 containerd[1432]: time="2025-09-09T00:20:52.010297272Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.130096488s"
Sep 9 00:20:52.010330 containerd[1432]: time="2025-09-09T00:20:52.010333620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 00:20:52.013148 containerd[1432]: time="2025-09-09T00:20:52.011417984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 00:20:53.009433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:20:53.018404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:20:53.115095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:20:53.119209 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:20:53.237018 kubelet[1803]: E0909 00:20:53.236962 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:20:53.239931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:20:53.240076 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:20:53.350305 containerd[1432]: time="2025-09-09T00:20:53.349857130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:53.366520 containerd[1432]: time="2025-09-09T00:20:53.366455807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 00:20:53.424974 containerd[1432]: time="2025-09-09T00:20:53.424901947Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:53.444403 containerd[1432]: time="2025-09-09T00:20:53.444344968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:53.445586 containerd[1432]: time="2025-09-09T00:20:53.445492433Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.434002682s"
Sep 9 00:20:53.445586 containerd[1432]: time="2025-09-09T00:20:53.445528724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 00:20:53.445987 containerd[1432]: time="2025-09-09T00:20:53.445964247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 00:20:54.453347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174288308.mount: Deactivated successfully.
Sep 9 00:20:55.105115 containerd[1432]: time="2025-09-09T00:20:55.104755050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:55.122255 containerd[1432]: time="2025-09-09T00:20:55.122180858Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 00:20:55.140709 containerd[1432]: time="2025-09-09T00:20:55.140622779Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:55.162149 containerd[1432]: time="2025-09-09T00:20:55.162028945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:55.167465 containerd[1432]: time="2025-09-09T00:20:55.167241615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.721242541s"
Sep 9 00:20:55.167465 containerd[1432]: time="2025-09-09T00:20:55.167454703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 00:20:55.169405 containerd[1432]: time="2025-09-09T00:20:55.168853767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:20:56.100757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007428766.mount: Deactivated successfully.
Sep 9 00:20:57.254119 containerd[1432]: time="2025-09-09T00:20:57.254049784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:57.279385 containerd[1432]: time="2025-09-09T00:20:57.279318575Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 00:20:57.302561 containerd[1432]: time="2025-09-09T00:20:57.302505833Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:57.325513 containerd[1432]: time="2025-09-09T00:20:57.325411809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:57.327026 containerd[1432]: time="2025-09-09T00:20:57.326878069Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.157990361s"
Sep 9 00:20:57.327026 containerd[1432]: time="2025-09-09T00:20:57.326917973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 00:20:57.328035 containerd[1432]: time="2025-09-09T00:20:57.327974542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:20:58.050325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260447213.mount: Deactivated successfully.
Sep 9 00:20:58.137625 containerd[1432]: time="2025-09-09T00:20:58.137552288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:58.153868 containerd[1432]: time="2025-09-09T00:20:58.153799016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 00:20:58.168414 containerd[1432]: time="2025-09-09T00:20:58.168328730Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:58.183735 containerd[1432]: time="2025-09-09T00:20:58.183654722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:58.185108 containerd[1432]: time="2025-09-09T00:20:58.184878697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 856.877736ms"
Sep 9 00:20:58.185108 containerd[1432]: time="2025-09-09T00:20:58.184913983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 00:20:58.185570 containerd[1432]: time="2025-09-09T00:20:58.185531319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 00:20:58.905589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725690486.mount: Deactivated successfully.
Sep 9 00:21:00.544630 containerd[1432]: time="2025-09-09T00:21:00.544579170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:21:00.545679 containerd[1432]: time="2025-09-09T00:21:00.545418893Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 00:21:00.547278 containerd[1432]: time="2025-09-09T00:21:00.547248017Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:21:00.551502 containerd[1432]: time="2025-09-09T00:21:00.550843566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:21:00.552445 containerd[1432]: time="2025-09-09T00:21:00.552179807Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.366613198s"
Sep 9 00:21:00.552445 containerd[1432]: time="2025-09-09T00:21:00.552222298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 00:21:03.259464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:21:03.269387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:03.401432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:03.404630 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:21:03.445734 kubelet[1961]: E0909 00:21:03.445670 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:21:03.448476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:21:03.448614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:21:06.055955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:06.068414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:06.099860 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit session-5.scope)...
Sep 9 00:21:06.100039 systemd[1]: Reloading...
Sep 9 00:21:06.177199 zram_generator::config[2013]: No configuration found.
Sep 9 00:21:06.321764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:21:06.379705 systemd[1]: Reloading finished in 279 ms.
Sep 9 00:21:06.433644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 00:21:06.433742 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 00:21:06.434000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:06.435715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:06.548464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:06.552928 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:21:06.596195 kubelet[2060]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:21:06.596195 kubelet[2060]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:21:06.596195 kubelet[2060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:21:06.596195 kubelet[2060]: I0909 00:21:06.596165 2060 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:21:07.309724 kubelet[2060]: I0909 00:21:07.309679 2060 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 00:21:07.309724 kubelet[2060]: I0909 00:21:07.309712 2060 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:21:07.310010 kubelet[2060]: I0909 00:21:07.309984 2060 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 00:21:07.331861 kubelet[2060]: E0909 00:21:07.331811 2060 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:07.335533 kubelet[2060]: I0909 00:21:07.335411 2060 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:21:07.341952 kubelet[2060]: E0909 00:21:07.341903 2060 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:21:07.341952 kubelet[2060]: I0909 00:21:07.341947 2060 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:21:07.346861 kubelet[2060]: I0909 00:21:07.346828 2060 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:21:07.347654 kubelet[2060]: I0909 00:21:07.347600 2060 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:21:07.347836 kubelet[2060]: I0909 00:21:07.347654 2060 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:21:07.347972 kubelet[2060]: I0909 00:21:07.347961 2060 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:21:07.347997 kubelet[2060]: I0909 00:21:07.347974 2060 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 00:21:07.348336 kubelet[2060]: I0909 00:21:07.348297 2060 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:21:07.353618 kubelet[2060]: I0909 00:21:07.353590 2060 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 00:21:07.353618 kubelet[2060]: I0909 00:21:07.353623 2060 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:21:07.353722 kubelet[2060]: I0909 00:21:07.353655 2060 kubelet.go:352] "Adding apiserver pod source"
Sep 9 00:21:07.353722 kubelet[2060]: I0909 00:21:07.353681 2060 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:21:07.357116 kubelet[2060]: W0909 00:21:07.356692 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:07.357116 kubelet[2060]: E0909 00:21:07.356756 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:07.357116 kubelet[2060]: I0909 00:21:07.356837 2060 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:21:07.357116 kubelet[2060]: W0909 00:21:07.356965 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:07.357116 kubelet[2060]: E0909 00:21:07.357010 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:07.357522 kubelet[2060]: I0909 00:21:07.357507 2060 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:21:07.357664 kubelet[2060]: W0909 00:21:07.357650 2060 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:21:07.359695 kubelet[2060]: I0909 00:21:07.358626 2060 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:21:07.359695 kubelet[2060]: I0909 00:21:07.358673 2060 server.go:1287] "Started kubelet"
Sep 9 00:21:07.359695 kubelet[2060]: I0909 00:21:07.359481 2060 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:21:07.361775 kubelet[2060]: I0909 00:21:07.361717 2060 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:21:07.362041 kubelet[2060]: I0909 00:21:07.362019 2060 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:21:07.362441 kubelet[2060]: I0909 00:21:07.362420 2060 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:21:07.362981 kubelet[2060]: I0909 00:21:07.362959 2060 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 00:21:07.366081 kubelet[2060]: E0909 00:21:07.365372 2060 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375504d423e7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:21:07.358645885 +0000 UTC m=+0.798427285,LastTimestamp:2025-09-09 00:21:07.358645885 +0000 UTC m=+0.798427285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:21:07.366301 kubelet[2060]: I0909 00:21:07.366267 2060 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:21:07.367237 kubelet[2060]: I0909 00:21:07.367204 2060 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:21:07.367545 kubelet[2060]: E0909 00:21:07.367525 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:07.367842 kubelet[2060]: I0909 00:21:07.367825 2060 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:21:07.367935 kubelet[2060]: I0909 00:21:07.367926 2060 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:21:07.369907 kubelet[2060]: W0909 00:21:07.369717 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:07.369907 kubelet[2060]: E0909 00:21:07.369773 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:07.370178 kubelet[2060]: I0909 00:21:07.370153 2060 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:21:07.370329 kubelet[2060]: I0909 00:21:07.370309 2060 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:21:07.371092 kubelet[2060]: E0909 00:21:07.371067 2060 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:21:07.371256 kubelet[2060]: E0909 00:21:07.371192 2060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms"
Sep 9 00:21:07.373158 kubelet[2060]: I0909 00:21:07.372261 2060 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:21:07.380198 kubelet[2060]: I0909 00:21:07.380153 2060 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:21:07.381932 kubelet[2060]: I0909 00:21:07.381211 2060 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:21:07.381932 kubelet[2060]: I0909 00:21:07.381235 2060 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 00:21:07.381932 kubelet[2060]: I0909 00:21:07.381255 2060 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:21:07.381932 kubelet[2060]: I0909 00:21:07.381263 2060 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 00:21:07.381932 kubelet[2060]: E0909 00:21:07.381310 2060 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:21:07.386615 kubelet[2060]: W0909 00:21:07.386550 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:07.386615 kubelet[2060]: E0909 00:21:07.386613 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:07.387512 kubelet[2060]: I0909 00:21:07.387457 2060 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:21:07.387512 kubelet[2060]: I0909 00:21:07.387479 2060 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:21:07.387512 kubelet[2060]: I0909 00:21:07.387497 2060 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:21:07.468267 kubelet[2060]: E0909 00:21:07.468230 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:07.471267 kubelet[2060]: I0909 00:21:07.471252 2060 policy_none.go:49] "None policy: Start"
Sep 9 00:21:07.471267 kubelet[2060]: I0909 00:21:07.471271 2060 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:21:07.471394 kubelet[2060]: I0909 00:21:07.471284 2060 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:21:07.476342 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 00:21:07.482374 kubelet[2060]: E0909 00:21:07.482337 2060 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 00:21:07.485848 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 00:21:07.489153 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 00:21:07.503162 kubelet[2060]: I0909 00:21:07.503096 2060 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:21:07.503368 kubelet[2060]: I0909 00:21:07.503341 2060 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:21:07.503399 kubelet[2060]: I0909 00:21:07.503360 2060 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:21:07.503773 kubelet[2060]: I0909 00:21:07.503700 2060 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:21:07.505059 kubelet[2060]: E0909 00:21:07.505023 2060 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:21:07.505150 kubelet[2060]: E0909 00:21:07.505069 2060 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:21:07.571959 kubelet[2060]: E0909 00:21:07.571844 2060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms"
Sep 9 00:21:07.604904 kubelet[2060]: I0909 00:21:07.604626 2060 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:21:07.605443 kubelet[2060]: E0909 00:21:07.605419 2060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost"
Sep 9 00:21:07.695658 systemd[1]: Created slice kubepods-burstable-pod80609c6b9742571f606ee75b07d3eab1.slice - libcontainer container kubepods-burstable-pod80609c6b9742571f606ee75b07d3eab1.slice.
Sep 9 00:21:07.707558 kubelet[2060]: E0909 00:21:07.707349 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:07.711085 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 9 00:21:07.728044 kubelet[2060]: E0909 00:21:07.727791 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:07.733206 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 9 00:21:07.736567 kubelet[2060]: E0909 00:21:07.736290 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:07.769022 kubelet[2060]: I0909 00:21:07.768739 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:07.769022 kubelet[2060]: I0909 00:21:07.768777 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:07.769022 kubelet[2060]: I0909 00:21:07.768797 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:07.769022 kubelet[2060]: I0909 00:21:07.768813 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:21:07.769022 kubelet[2060]: I0909 00:21:07.768827 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:07.769237 kubelet[2060]: I0909 00:21:07.768850 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:07.769237 kubelet[2060]: I0909 00:21:07.768867 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:21:07.769237 kubelet[2060]: I0909 00:21:07.768881 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:21:07.769237 kubelet[2060]: I0909 00:21:07.768895 2060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:21:07.807716 kubelet[2060]: I0909 00:21:07.807684 2060 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:21:07.808616 kubelet[2060]: E0909 00:21:07.808554 2060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost"
Sep 9 00:21:07.972588 kubelet[2060]: E0909 00:21:07.972448 2060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms"
Sep 9 00:21:08.008166 kubelet[2060]: E0909 00:21:08.008064 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.009090 containerd[1432]: time="2025-09-09T00:21:08.009045143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80609c6b9742571f606ee75b07d3eab1,Namespace:kube-system,Attempt:0,}"
Sep 9 00:21:08.029333 kubelet[2060]: E0909 00:21:08.029017 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.029942 containerd[1432]: time="2025-09-09T00:21:08.029531658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 9 00:21:08.037626 kubelet[2060]: E0909 00:21:08.037482 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.038485 containerd[1432]: time="2025-09-09T00:21:08.038432281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 9 00:21:08.210291 kubelet[2060]: I0909 00:21:08.210245 2060 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:21:08.210587 kubelet[2060]: E0909 00:21:08.210548 2060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost"
Sep 9 00:21:08.226984 kubelet[2060]: W0909 00:21:08.226451 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:08.226984 kubelet[2060]: E0909 00:21:08.226530 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:08.262568 kubelet[2060]: W0909 00:21:08.262505 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:08.262824 kubelet[2060]: E0909 00:21:08.262731 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:08.550407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912299372.mount: Deactivated successfully.
Sep 9 00:21:08.565635 containerd[1432]: time="2025-09-09T00:21:08.565565421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:21:08.571903 containerd[1432]: time="2025-09-09T00:21:08.571453394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 9 00:21:08.573877 containerd[1432]: time="2025-09-09T00:21:08.573762154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:21:08.578916 containerd[1432]: time="2025-09-09T00:21:08.578143843Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:21:08.580361 containerd[1432]: time="2025-09-09T00:21:08.580333628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:21:08.582988 containerd[1432]: time="2025-09-09T00:21:08.581821683Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:21:08.583291 containerd[1432]: time="2025-09-09T00:21:08.583265482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:21:08.584935 containerd[1432]: time="2025-09-09T00:21:08.584898776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:21:08.588918 containerd[1432]: time="2025-09-09T00:21:08.588519767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.393189ms"
Sep 9 00:21:08.591097 containerd[1432]: time="2025-09-09T00:21:08.590951098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.332329ms"
Sep 9 00:21:08.596107 containerd[1432]: time="2025-09-09T00:21:08.595760870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.241678ms"
Sep 9 00:21:08.730231 containerd[1432]: time="2025-09-09T00:21:08.730098148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:21:08.730231 containerd[1432]: time="2025-09-09T00:21:08.730206728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:21:08.730231 containerd[1432]: time="2025-09-09T00:21:08.730223559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.730437 containerd[1432]: time="2025-09-09T00:21:08.730308711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.732559 containerd[1432]: time="2025-09-09T00:21:08.732204460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:21:08.732559 containerd[1432]: time="2025-09-09T00:21:08.732289772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:21:08.732559 containerd[1432]: time="2025-09-09T00:21:08.732302805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.732559 containerd[1432]: time="2025-09-09T00:21:08.732376244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.738597 containerd[1432]: time="2025-09-09T00:21:08.737954910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:21:08.738597 containerd[1432]: time="2025-09-09T00:21:08.738008120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:21:08.738597 containerd[1432]: time="2025-09-09T00:21:08.738023352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.738597 containerd[1432]: time="2025-09-09T00:21:08.738103587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:08.757321 systemd[1]: Started cri-containerd-11e88ea8a14be484e3733f7aea42d7088ba6412f040faa7788524ad537c75f67.scope - libcontainer container 11e88ea8a14be484e3733f7aea42d7088ba6412f040faa7788524ad537c75f67.
Sep 9 00:21:08.761787 systemd[1]: Started cri-containerd-313a1ab9f44e9b113be655425d4b34108c72ffdf09096afb547a522dc8779b25.scope - libcontainer container 313a1ab9f44e9b113be655425d4b34108c72ffdf09096afb547a522dc8779b25.
Sep 9 00:21:08.763557 systemd[1]: Started cri-containerd-b371b7014248d31c56924ecd991636df5825fc6eb6b7b9614e761c8ddf2b59d3.scope - libcontainer container b371b7014248d31c56924ecd991636df5825fc6eb6b7b9614e761c8ddf2b59d3.
Sep 9 00:21:08.773546 kubelet[2060]: E0909 00:21:08.773508 2060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s"
Sep 9 00:21:08.794091 containerd[1432]: time="2025-09-09T00:21:08.794052710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"313a1ab9f44e9b113be655425d4b34108c72ffdf09096afb547a522dc8779b25\""
Sep 9 00:21:08.795445 kubelet[2060]: E0909 00:21:08.795383 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.797517 containerd[1432]: time="2025-09-09T00:21:08.797483047Z" level=info msg="CreateContainer within sandbox \"313a1ab9f44e9b113be655425d4b34108c72ffdf09096afb547a522dc8779b25\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:21:08.802474 containerd[1432]: time="2025-09-09T00:21:08.802370736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80609c6b9742571f606ee75b07d3eab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"11e88ea8a14be484e3733f7aea42d7088ba6412f040faa7788524ad537c75f67\""
Sep 9 00:21:08.803099 kubelet[2060]: E0909 00:21:08.803049 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.804408 containerd[1432]: time="2025-09-09T00:21:08.804365269Z" level=info msg="CreateContainer within sandbox \"11e88ea8a14be484e3733f7aea42d7088ba6412f040faa7788524ad537c75f67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:21:08.811601 containerd[1432]: time="2025-09-09T00:21:08.811562397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b371b7014248d31c56924ecd991636df5825fc6eb6b7b9614e761c8ddf2b59d3\""
Sep 9 00:21:08.812443 kubelet[2060]: E0909 00:21:08.812421 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:08.814006 containerd[1432]: time="2025-09-09T00:21:08.813979976Z" level=info msg="CreateContainer within sandbox \"b371b7014248d31c56924ecd991636df5825fc6eb6b7b9614e761c8ddf2b59d3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:21:08.817365 containerd[1432]: time="2025-09-09T00:21:08.817333595Z" level=info msg="CreateContainer within sandbox \"313a1ab9f44e9b113be655425d4b34108c72ffdf09096afb547a522dc8779b25\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd0e78b220e5bfc90792ae3bfd22635c0fb1076713c6a186c2d027079ca6d525\""
Sep 9 00:21:08.818058 containerd[1432]: time="2025-09-09T00:21:08.818025532Z" level=info msg="StartContainer for \"dd0e78b220e5bfc90792ae3bfd22635c0fb1076713c6a186c2d027079ca6d525\""
Sep 9 00:21:08.828110 containerd[1432]: time="2025-09-09T00:21:08.828069400Z" level=info msg="CreateContainer within sandbox \"11e88ea8a14be484e3733f7aea42d7088ba6412f040faa7788524ad537c75f67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ab1f8c84d58dd17ed74cebd13fb8a80b78b11b30fdfd0f92b93db4708f39f379\""
Sep 9 00:21:08.828650 containerd[1432]: time="2025-09-09T00:21:08.828624972Z" level=info msg="StartContainer for \"ab1f8c84d58dd17ed74cebd13fb8a80b78b11b30fdfd0f92b93db4708f39f379\""
Sep 9 00:21:08.839363 containerd[1432]: time="2025-09-09T00:21:08.839318520Z" level=info msg="CreateContainer within sandbox \"b371b7014248d31c56924ecd991636df5825fc6eb6b7b9614e761c8ddf2b59d3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0956b35606aaa97dfca92bba264c61e5ff71774d4497b833811b137a1495cd49\""
Sep 9 00:21:08.839845 containerd[1432]: time="2025-09-09T00:21:08.839817563Z" level=info msg="StartContainer for \"0956b35606aaa97dfca92bba264c61e5ff71774d4497b833811b137a1495cd49\""
Sep 9 00:21:08.844943 systemd[1]: Started cri-containerd-dd0e78b220e5bfc90792ae3bfd22635c0fb1076713c6a186c2d027079ca6d525.scope - libcontainer container dd0e78b220e5bfc90792ae3bfd22635c0fb1076713c6a186c2d027079ca6d525.
Sep 9 00:21:08.857313 systemd[1]: Started cri-containerd-ab1f8c84d58dd17ed74cebd13fb8a80b78b11b30fdfd0f92b93db4708f39f379.scope - libcontainer container ab1f8c84d58dd17ed74cebd13fb8a80b78b11b30fdfd0f92b93db4708f39f379.
Sep 9 00:21:08.860450 systemd[1]: Started cri-containerd-0956b35606aaa97dfca92bba264c61e5ff71774d4497b833811b137a1495cd49.scope - libcontainer container 0956b35606aaa97dfca92bba264c61e5ff71774d4497b833811b137a1495cd49.
Sep 9 00:21:08.888442 containerd[1432]: time="2025-09-09T00:21:08.888400132Z" level=info msg="StartContainer for \"dd0e78b220e5bfc90792ae3bfd22635c0fb1076713c6a186c2d027079ca6d525\" returns successfully"
Sep 9 00:21:08.905343 containerd[1432]: time="2025-09-09T00:21:08.905281088Z" level=info msg="StartContainer for \"0956b35606aaa97dfca92bba264c61e5ff71774d4497b833811b137a1495cd49\" returns successfully"
Sep 9 00:21:08.905472 containerd[1432]: time="2025-09-09T00:21:08.905361403Z" level=info msg="StartContainer for \"ab1f8c84d58dd17ed74cebd13fb8a80b78b11b30fdfd0f92b93db4708f39f379\" returns successfully"
Sep 9 00:21:08.920568 kubelet[2060]: W0909 00:21:08.920437 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:08.920568 kubelet[2060]: E0909 00:21:08.920545 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:08.950609 kubelet[2060]: W0909 00:21:08.950531 2060 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 9 00:21:08.950609 kubelet[2060]: E0909 00:21:08.950613 2060 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:21:09.012004 kubelet[2060]: I0909 00:21:09.011972 2060 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:21:09.394460 kubelet[2060]: E0909 00:21:09.394433 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:09.394866 kubelet[2060]: E0909 00:21:09.394788 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:09.396615 kubelet[2060]: E0909 00:21:09.396588 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:09.397334 kubelet[2060]: E0909 00:21:09.396711 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:09.397855 kubelet[2060]: E0909 00:21:09.397838 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:09.397971 kubelet[2060]: E0909 00:21:09.397956 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:10.382541 kubelet[2060]: E0909 00:21:10.382501 2060 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:21:10.399746 kubelet[2060]: E0909 00:21:10.399714 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:10.399881 kubelet[2060]: E0909 00:21:10.399829 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:10.400071 kubelet[2060]: E0909 00:21:10.400055 2060 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:21:10.400186 kubelet[2060]: E0909 00:21:10.400165 2060 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:10.483202 kubelet[2060]: I0909 00:21:10.483165 2060 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:21:10.483202 kubelet[2060]: E0909 00:21:10.483201 2060 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 9 00:21:10.493395 kubelet[2060]: E0909 00:21:10.493350 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:10.593947 kubelet[2060]: E0909 00:21:10.593883 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:10.694703 kubelet[2060]: E0909 00:21:10.694568 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:10.795459 kubelet[2060]: E0909 00:21:10.795412 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:10.896291 kubelet[2060]: E0909 00:21:10.896210 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:10.997097 kubelet[2060]: E0909 00:21:10.996932 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:11.098316 kubelet[2060]: E0909 00:21:11.097115 2060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:21:11.168643 kubelet[2060]: I0909 00:21:11.168290 2060 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:21:11.177096 kubelet[2060]: E0909 00:21:11.176836 2060 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:21:11.177096 kubelet[2060]: I0909 00:21:11.176866 2060 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:21:11.179107 kubelet[2060]: E0909 00:21:11.178520 2060 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:21:11.179107 kubelet[2060]: I0909 00:21:11.178541 2060 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:11.183916 kubelet[2060]: E0909 00:21:11.183846 2060 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:21:11.356588 kubelet[2060]: I0909 00:21:11.356464 2060 apiserver.go:52] "Watching apiserver"
Sep 9 00:21:11.369490 kubelet[2060]: I0909 00:21:11.368836 2060 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:21:12.500232 systemd[1]: Reloading requested from client PID 2337 ('systemctl') (unit session-5.scope)...
Sep 9 00:21:12.500245 systemd[1]: Reloading...
Sep 9 00:21:12.566569 zram_generator::config[2375]: No configuration found.
Sep 9 00:21:12.652242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:21:12.717326 systemd[1]: Reloading finished in 216 ms.
Sep 9 00:21:12.755627 kubelet[2060]: I0909 00:21:12.755328 2060 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:21:12.755405 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:12.772431 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:21:12.772625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:12.772667 systemd[1]: kubelet.service: Consumed 1.157s CPU time, 130.9M memory peak, 0B memory swap peak.
Sep 9 00:21:12.781447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:21:12.895442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:21:12.899285 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:21:12.943463 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:21:12.943463 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:21:12.943463 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:21:12.943802 kubelet[2418]: I0909 00:21:12.943515 2418 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:21:12.952014 kubelet[2418]: I0909 00:21:12.951962 2418 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 00:21:12.952014 kubelet[2418]: I0909 00:21:12.951993 2418 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:21:12.952694 kubelet[2418]: I0909 00:21:12.952596 2418 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 00:21:12.953932 kubelet[2418]: I0909 00:21:12.953900 2418 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 00:21:12.957462 kubelet[2418]: I0909 00:21:12.957437 2418 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:21:12.959960 kubelet[2418]: E0909 00:21:12.959915 2418 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:21:12.959960 kubelet[2418]: I0909 00:21:12.959960 2418 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:21:12.962276 kubelet[2418]: I0909 00:21:12.962259 2418 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:21:12.962454 kubelet[2418]: I0909 00:21:12.962434 2418 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:21:12.962614 kubelet[2418]: I0909 00:21:12.962456 2418 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:21:12.962687 kubelet[2418]: I0909 00:21:12.962625 2418 topology_manager.go:138] "Creating topology manager with none policy"
Sep
9 00:21:12.962687 kubelet[2418]: I0909 00:21:12.962634 2418 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:21:12.962687 kubelet[2418]: I0909 00:21:12.962680 2418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:21:12.962809 kubelet[2418]: I0909 00:21:12.962798 2418 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:21:12.962836 kubelet[2418]: I0909 00:21:12.962811 2418 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:21:12.962836 kubelet[2418]: I0909 00:21:12.962825 2418 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:21:12.962836 kubelet[2418]: I0909 00:21:12.962834 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:21:12.963626 kubelet[2418]: I0909 00:21:12.963561 2418 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:21:12.963981 kubelet[2418]: I0909 00:21:12.963960 2418 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:21:12.964354 kubelet[2418]: I0909 00:21:12.964337 2418 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:21:12.964425 kubelet[2418]: I0909 00:21:12.964366 2418 server.go:1287] "Started kubelet" Sep 9 00:21:12.965024 kubelet[2418]: I0909 00:21:12.964493 2418 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:21:12.965024 kubelet[2418]: I0909 00:21:12.964644 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:21:12.965024 kubelet[2418]: I0909 00:21:12.964848 2418 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:21:12.978699 kubelet[2418]: I0909 00:21:12.978614 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:21:12.979047 kubelet[2418]: I0909 00:21:12.978925 2418 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:21:12.979999 kubelet[2418]: I0909 00:21:12.979972 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:21:12.980598 kubelet[2418]: I0909 00:21:12.980580 2418 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:21:12.980923 kubelet[2418]: I0909 00:21:12.980909 2418 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:21:12.981049 kubelet[2418]: I0909 00:21:12.981039 2418 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:21:12.981311 kubelet[2418]: I0909 00:21:12.981238 2418 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:21:12.989124 kubelet[2418]: I0909 00:21:12.988819 2418 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:21:12.989124 kubelet[2418]: I0909 00:21:12.988842 2418 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:21:12.996155 kubelet[2418]: I0909 00:21:12.995923 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:21:12.999705 kubelet[2418]: I0909 00:21:12.999307 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:21:12.999705 kubelet[2418]: I0909 00:21:12.999426 2418 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:21:12.999705 kubelet[2418]: I0909 00:21:12.999447 2418 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:21:12.999705 kubelet[2418]: I0909 00:21:12.999458 2418 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:21:12.999705 kubelet[2418]: E0909 00:21:12.999573 2418 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:21:13.022776 kubelet[2418]: I0909 00:21:13.022745 2418 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:21:13.022776 kubelet[2418]: I0909 00:21:13.022773 2418 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:21:13.022776 kubelet[2418]: I0909 00:21:13.022796 2418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:21:13.023058 kubelet[2418]: I0909 00:21:13.022966 2418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:21:13.023058 kubelet[2418]: I0909 00:21:13.022977 2418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:21:13.023058 kubelet[2418]: I0909 00:21:13.022994 2418 policy_none.go:49] "None policy: Start" Sep 9 00:21:13.023058 kubelet[2418]: I0909 00:21:13.023003 2418 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:21:13.023058 kubelet[2418]: I0909 00:21:13.023011 2418 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:21:13.023261 kubelet[2418]: I0909 00:21:13.023099 2418 state_mem.go:75] "Updated machine memory state" Sep 9 00:21:13.026790 kubelet[2418]: I0909 00:21:13.026738 2418 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:21:13.027491 kubelet[2418]: I0909 00:21:13.026905 2418 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:21:13.027491 kubelet[2418]: I0909 00:21:13.026917 2418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:21:13.027491 kubelet[2418]: I0909 00:21:13.027069 2418 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 9 00:21:13.028719 kubelet[2418]: E0909 00:21:13.028210 2418 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:21:13.101176 kubelet[2418]: I0909 00:21:13.101073 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:13.101290 kubelet[2418]: I0909 00:21:13.101228 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.101360 kubelet[2418]: I0909 00:21:13.101228 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:21:13.136322 kubelet[2418]: I0909 00:21:13.136277 2418 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:21:13.144616 kubelet[2418]: I0909 00:21:13.144015 2418 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:21:13.144616 kubelet[2418]: I0909 00:21:13.144102 2418 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:21:13.283685 kubelet[2418]: I0909 00:21:13.282939 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:21:13.283685 kubelet[2418]: I0909 00:21:13.282975 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:13.283685 kubelet[2418]: I0909 00:21:13.283002 2418 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:13.283685 kubelet[2418]: I0909 00:21:13.283038 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.283685 kubelet[2418]: I0909 00:21:13.283057 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.283900 kubelet[2418]: I0909 00:21:13.283073 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.283900 kubelet[2418]: I0909 00:21:13.283088 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80609c6b9742571f606ee75b07d3eab1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80609c6b9742571f606ee75b07d3eab1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:13.283900 kubelet[2418]: I0909 00:21:13.283103 2418 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.283900 kubelet[2418]: I0909 00:21:13.283120 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:21:13.408756 kubelet[2418]: E0909 00:21:13.407849 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:13.410509 kubelet[2418]: E0909 00:21:13.410474 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:13.410793 kubelet[2418]: E0909 00:21:13.410702 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:13.963460 kubelet[2418]: I0909 00:21:13.963383 2418 apiserver.go:52] "Watching apiserver" Sep 9 00:21:14.009562 kubelet[2418]: E0909 00:21:14.009497 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:14.010078 kubelet[2418]: I0909 00:21:14.010048 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:14.010406 
kubelet[2418]: I0909 00:21:14.010391 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:21:14.018360 kubelet[2418]: E0909 00:21:14.018313 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:21:14.018463 kubelet[2418]: E0909 00:21:14.018448 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:14.018798 kubelet[2418]: E0909 00:21:14.018781 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:21:14.020037 kubelet[2418]: E0909 00:21:14.020014 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:14.037499 kubelet[2418]: I0909 00:21:14.037413 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.037380121 podStartE2EDuration="1.037380121s" podCreationTimestamp="2025-09-09 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:14.037185804 +0000 UTC m=+1.134930253" watchObservedRunningTime="2025-09-09 00:21:14.037380121 +0000 UTC m=+1.135124570" Sep 9 00:21:14.045854 kubelet[2418]: I0909 00:21:14.045806 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.04579017 podStartE2EDuration="1.04579017s" podCreationTimestamp="2025-09-09 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 00:21:14.045470936 +0000 UTC m=+1.143215385" watchObservedRunningTime="2025-09-09 00:21:14.04579017 +0000 UTC m=+1.143534619" Sep 9 00:21:14.081532 kubelet[2418]: I0909 00:21:14.081497 2418 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:21:14.269790 sudo[1571]: pam_unix(sudo:session): session closed for user root Sep 9 00:21:14.271308 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:14.275823 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:60774.service: Deactivated successfully. Sep 9 00:21:14.279095 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:21:14.279359 systemd[1]: session-5.scope: Consumed 6.554s CPU time, 151.0M memory peak, 0B memory swap peak. Sep 9 00:21:14.280060 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:21:14.281255 systemd-logind[1418]: Removed session 5. Sep 9 00:21:15.011538 kubelet[2418]: E0909 00:21:15.011387 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:15.013055 kubelet[2418]: E0909 00:21:15.012483 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:16.013813 kubelet[2418]: E0909 00:21:16.013650 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:17.980251 kubelet[2418]: I0909 00:21:17.980208 2418 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:21:17.980611 containerd[1432]: time="2025-09-09T00:21:17.980513479Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Sep 9 00:21:17.980796 kubelet[2418]: I0909 00:21:17.980694 2418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:21:18.663364 kubelet[2418]: E0909 00:21:18.658353 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:18.692526 kubelet[2418]: I0909 00:21:18.692379 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.692341265 podStartE2EDuration="5.692341265s" podCreationTimestamp="2025-09-09 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:14.0541483 +0000 UTC m=+1.151892709" watchObservedRunningTime="2025-09-09 00:21:18.692341265 +0000 UTC m=+5.790085714" Sep 9 00:21:18.955368 systemd[1]: Created slice kubepods-besteffort-podfd4174d5_c034_40c9_9c0e_49c872b548a4.slice - libcontainer container kubepods-besteffort-podfd4174d5_c034_40c9_9c0e_49c872b548a4.slice. Sep 9 00:21:18.970178 systemd[1]: Created slice kubepods-burstable-pod19e86c63_8be6_4cc6_9da1_3612885d364f.slice - libcontainer container kubepods-burstable-pod19e86c63_8be6_4cc6_9da1_3612885d364f.slice. 
Sep 9 00:21:19.019570 kubelet[2418]: E0909 00:21:19.019531 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:19.027398 kubelet[2418]: I0909 00:21:19.027323 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/19e86c63-8be6-4cc6-9da1-3612885d364f-cni-plugin\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027398 kubelet[2418]: I0909 00:21:19.027360 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/19e86c63-8be6-4cc6-9da1-3612885d364f-flannel-cfg\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027398 kubelet[2418]: I0909 00:21:19.027391 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/19e86c63-8be6-4cc6-9da1-3612885d364f-cni\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027657 kubelet[2418]: I0909 00:21:19.027413 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19e86c63-8be6-4cc6-9da1-3612885d364f-xtables-lock\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027657 kubelet[2418]: I0909 00:21:19.027431 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fd4174d5-c034-40c9-9c0e-49c872b548a4-lib-modules\") pod \"kube-proxy-jcddk\" (UID: \"fd4174d5-c034-40c9-9c0e-49c872b548a4\") " pod="kube-system/kube-proxy-jcddk" Sep 9 00:21:19.027657 kubelet[2418]: I0909 00:21:19.027448 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19e86c63-8be6-4cc6-9da1-3612885d364f-run\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027657 kubelet[2418]: I0909 00:21:19.027462 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5nd\" (UniqueName: \"kubernetes.io/projected/19e86c63-8be6-4cc6-9da1-3612885d364f-kube-api-access-vc5nd\") pod \"kube-flannel-ds-2x6qx\" (UID: \"19e86c63-8be6-4cc6-9da1-3612885d364f\") " pod="kube-flannel/kube-flannel-ds-2x6qx" Sep 9 00:21:19.027657 kubelet[2418]: I0909 00:21:19.027481 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd4174d5-c034-40c9-9c0e-49c872b548a4-kube-proxy\") pod \"kube-proxy-jcddk\" (UID: \"fd4174d5-c034-40c9-9c0e-49c872b548a4\") " pod="kube-system/kube-proxy-jcddk" Sep 9 00:21:19.027773 kubelet[2418]: I0909 00:21:19.027496 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6cb5\" (UniqueName: \"kubernetes.io/projected/fd4174d5-c034-40c9-9c0e-49c872b548a4-kube-api-access-c6cb5\") pod \"kube-proxy-jcddk\" (UID: \"fd4174d5-c034-40c9-9c0e-49c872b548a4\") " pod="kube-system/kube-proxy-jcddk" Sep 9 00:21:19.027773 kubelet[2418]: I0909 00:21:19.027512 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/fd4174d5-c034-40c9-9c0e-49c872b548a4-xtables-lock\") pod \"kube-proxy-jcddk\" (UID: \"fd4174d5-c034-40c9-9c0e-49c872b548a4\") " pod="kube-system/kube-proxy-jcddk" Sep 9 00:21:19.268428 kubelet[2418]: E0909 00:21:19.268387 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:19.269071 containerd[1432]: time="2025-09-09T00:21:19.268988930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcddk,Uid:fd4174d5-c034-40c9-9c0e-49c872b548a4,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:19.276287 kubelet[2418]: E0909 00:21:19.276253 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:19.276794 containerd[1432]: time="2025-09-09T00:21:19.276746304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2x6qx,Uid:19e86c63-8be6-4cc6-9da1-3612885d364f,Namespace:kube-flannel,Attempt:0,}" Sep 9 00:21:19.293936 containerd[1432]: time="2025-09-09T00:21:19.293744554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:19.293936 containerd[1432]: time="2025-09-09T00:21:19.293801633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:19.293936 containerd[1432]: time="2025-09-09T00:21:19.293812993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:19.293936 containerd[1432]: time="2025-09-09T00:21:19.293894192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:19.310518 containerd[1432]: time="2025-09-09T00:21:19.310338129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:19.310518 containerd[1432]: time="2025-09-09T00:21:19.310426608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:19.310518 containerd[1432]: time="2025-09-09T00:21:19.310438647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:19.310702 containerd[1432]: time="2025-09-09T00:21:19.310598725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:19.311351 systemd[1]: Started cri-containerd-525e26c0af30bedac61369dccca96facafe0c0502756b1a9f08d1b2271ca5260.scope - libcontainer container 525e26c0af30bedac61369dccca96facafe0c0502756b1a9f08d1b2271ca5260. Sep 9 00:21:19.323256 systemd[1]: Started cri-containerd-dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581.scope - libcontainer container dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581. 
Sep 9 00:21:19.336556 containerd[1432]: time="2025-09-09T00:21:19.336458655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcddk,Uid:fd4174d5-c034-40c9-9c0e-49c872b548a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"525e26c0af30bedac61369dccca96facafe0c0502756b1a9f08d1b2271ca5260\"" Sep 9 00:21:19.337881 kubelet[2418]: E0909 00:21:19.337855 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:19.339882 containerd[1432]: time="2025-09-09T00:21:19.339850529Z" level=info msg="CreateContainer within sandbox \"525e26c0af30bedac61369dccca96facafe0c0502756b1a9f08d1b2271ca5260\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:21:19.358658 containerd[1432]: time="2025-09-09T00:21:19.358602554Z" level=info msg="CreateContainer within sandbox \"525e26c0af30bedac61369dccca96facafe0c0502756b1a9f08d1b2271ca5260\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f622da88836b46cc4958433ce3d9b994cbcdda87275ccb94723bbe5c945fe7f\"" Sep 9 00:21:19.359702 containerd[1432]: time="2025-09-09T00:21:19.359282305Z" level=info msg="StartContainer for \"3f622da88836b46cc4958433ce3d9b994cbcdda87275ccb94723bbe5c945fe7f\"" Sep 9 00:21:19.363050 containerd[1432]: time="2025-09-09T00:21:19.363015134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2x6qx,Uid:19e86c63-8be6-4cc6-9da1-3612885d364f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\"" Sep 9 00:21:19.363793 kubelet[2418]: E0909 00:21:19.363743 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:19.365123 containerd[1432]: time="2025-09-09T00:21:19.365092906Z" level=info msg="PullImage 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Sep 9 00:21:19.394347 systemd[1]: Started cri-containerd-3f622da88836b46cc4958433ce3d9b994cbcdda87275ccb94723bbe5c945fe7f.scope - libcontainer container 3f622da88836b46cc4958433ce3d9b994cbcdda87275ccb94723bbe5c945fe7f. Sep 9 00:21:19.422680 containerd[1432]: time="2025-09-09T00:21:19.422640086Z" level=info msg="StartContainer for \"3f622da88836b46cc4958433ce3d9b994cbcdda87275ccb94723bbe5c945fe7f\" returns successfully" Sep 9 00:21:20.023634 kubelet[2418]: E0909 00:21:20.023079 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:20.044347 kubelet[2418]: I0909 00:21:20.044282 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jcddk" podStartSLOduration=2.044264366 podStartE2EDuration="2.044264366s" podCreationTimestamp="2025-09-09 00:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:20.044183047 +0000 UTC m=+7.141927496" watchObservedRunningTime="2025-09-09 00:21:20.044264366 +0000 UTC m=+7.142008815" Sep 9 00:21:20.533503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393531768.mount: Deactivated successfully. 
Sep 9 00:21:20.573449 containerd[1432]: time="2025-09-09T00:21:20.573068813Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:20.575180 containerd[1432]: time="2025-09-09T00:21:20.575090867Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Sep 9 00:21:20.576111 containerd[1432]: time="2025-09-09T00:21:20.576067055Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:20.579366 containerd[1432]: time="2025-09-09T00:21:20.579313293Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:20.580282 containerd[1432]: time="2025-09-09T00:21:20.580218002Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.215090536s" Sep 9 00:21:20.580282 containerd[1432]: time="2025-09-09T00:21:20.580255201Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Sep 9 00:21:20.583071 containerd[1432]: time="2025-09-09T00:21:20.583023366Z" level=info msg="CreateContainer within sandbox \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Sep 9 00:21:20.602151 containerd[1432]: 
time="2025-09-09T00:21:20.601563167Z" level=info msg="CreateContainer within sandbox \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448\"" Sep 9 00:21:20.606414 containerd[1432]: time="2025-09-09T00:21:20.605606835Z" level=info msg="StartContainer for \"4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448\"" Sep 9 00:21:20.643364 systemd[1]: Started cri-containerd-4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448.scope - libcontainer container 4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448. Sep 9 00:21:20.673988 systemd[1]: cri-containerd-4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448.scope: Deactivated successfully. Sep 9 00:21:20.675942 containerd[1432]: time="2025-09-09T00:21:20.675905452Z" level=info msg="StartContainer for \"4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448\" returns successfully" Sep 9 00:21:20.718100 containerd[1432]: time="2025-09-09T00:21:20.717971192Z" level=info msg="shim disconnected" id=4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448 namespace=k8s.io Sep 9 00:21:20.718100 containerd[1432]: time="2025-09-09T00:21:20.718024391Z" level=warning msg="cleaning up after shim disconnected" id=4eca9ede89c57be2a4c824d2527fb7f62a83ccdcde694103de0543bd754e3448 namespace=k8s.io Sep 9 00:21:20.718100 containerd[1432]: time="2025-09-09T00:21:20.718032671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:21:21.029856 kubelet[2418]: E0909 00:21:21.029506 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:21.030743 containerd[1432]: time="2025-09-09T00:21:21.030712035Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Sep 9 
00:21:22.241558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272408173.mount: Deactivated successfully. Sep 9 00:21:23.057641 kubelet[2418]: E0909 00:21:23.057563 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:23.508012 kubelet[2418]: E0909 00:21:23.507833 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:24.034171 kubelet[2418]: E0909 00:21:24.034065 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:24.034171 kubelet[2418]: E0909 00:21:24.034057 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:24.100611 containerd[1432]: time="2025-09-09T00:21:24.100562686Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:24.101199 containerd[1432]: time="2025-09-09T00:21:24.101164600Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" Sep 9 00:21:24.103586 containerd[1432]: time="2025-09-09T00:21:24.103523655Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:24.108372 containerd[1432]: time="2025-09-09T00:21:24.106903380Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Sep 9 00:21:24.108717 containerd[1432]: time="2025-09-09T00:21:24.108061688Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.077312574s" Sep 9 00:21:24.108717 containerd[1432]: time="2025-09-09T00:21:24.108624202Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Sep 9 00:21:24.117843 containerd[1432]: time="2025-09-09T00:21:24.117796387Z" level=info msg="CreateContainer within sandbox \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:21:24.165859 containerd[1432]: time="2025-09-09T00:21:24.165791528Z" level=info msg="CreateContainer within sandbox \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da\"" Sep 9 00:21:24.167179 containerd[1432]: time="2025-09-09T00:21:24.166638199Z" level=info msg="StartContainer for \"965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da\"" Sep 9 00:21:24.202345 systemd[1]: Started cri-containerd-965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da.scope - libcontainer container 965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da. Sep 9 00:21:24.223002 systemd[1]: cri-containerd-965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da.scope: Deactivated successfully. 
Sep 9 00:21:24.224368 containerd[1432]: time="2025-09-09T00:21:24.223919003Z" level=info msg="StartContainer for \"965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da\" returns successfully" Sep 9 00:21:24.243473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da-rootfs.mount: Deactivated successfully. Sep 9 00:21:24.244226 kubelet[2418]: I0909 00:21:24.244194 2418 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:21:24.291875 systemd[1]: Created slice kubepods-burstable-poda5a8b0e4_6528_40cd_9b53_98a4eaa14710.slice - libcontainer container kubepods-burstable-poda5a8b0e4_6528_40cd_9b53_98a4eaa14710.slice. Sep 9 00:21:24.298850 systemd[1]: Created slice kubepods-burstable-podc944489b_9c07_4916_812d_a7cb52003f3d.slice - libcontainer container kubepods-burstable-podc944489b_9c07_4916_812d_a7cb52003f3d.slice. Sep 9 00:21:24.360949 kubelet[2418]: I0909 00:21:24.360893 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfnc\" (UniqueName: \"kubernetes.io/projected/a5a8b0e4-6528-40cd-9b53-98a4eaa14710-kube-api-access-ndfnc\") pod \"coredns-668d6bf9bc-6t5vd\" (UID: \"a5a8b0e4-6528-40cd-9b53-98a4eaa14710\") " pod="kube-system/coredns-668d6bf9bc-6t5vd" Sep 9 00:21:24.360949 kubelet[2418]: I0909 00:21:24.360943 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5a8b0e4-6528-40cd-9b53-98a4eaa14710-config-volume\") pod \"coredns-668d6bf9bc-6t5vd\" (UID: \"a5a8b0e4-6528-40cd-9b53-98a4eaa14710\") " pod="kube-system/coredns-668d6bf9bc-6t5vd" Sep 9 00:21:24.360949 kubelet[2418]: I0909 00:21:24.360962 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8925\" (UniqueName: 
\"kubernetes.io/projected/c944489b-9c07-4916-812d-a7cb52003f3d-kube-api-access-q8925\") pod \"coredns-668d6bf9bc-b8v6l\" (UID: \"c944489b-9c07-4916-812d-a7cb52003f3d\") " pod="kube-system/coredns-668d6bf9bc-b8v6l" Sep 9 00:21:24.361268 kubelet[2418]: I0909 00:21:24.360982 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c944489b-9c07-4916-812d-a7cb52003f3d-config-volume\") pod \"coredns-668d6bf9bc-b8v6l\" (UID: \"c944489b-9c07-4916-812d-a7cb52003f3d\") " pod="kube-system/coredns-668d6bf9bc-b8v6l" Sep 9 00:21:24.364986 containerd[1432]: time="2025-09-09T00:21:24.364918496Z" level=info msg="shim disconnected" id=965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da namespace=k8s.io Sep 9 00:21:24.364986 containerd[1432]: time="2025-09-09T00:21:24.364971296Z" level=warning msg="cleaning up after shim disconnected" id=965174cfeb322b79ee6d7808b6b6827e8cee640d9cb15b97a35008a7ef4602da namespace=k8s.io Sep 9 00:21:24.364986 containerd[1432]: time="2025-09-09T00:21:24.364982856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:21:24.596442 kubelet[2418]: E0909 00:21:24.596320 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:24.597432 containerd[1432]: time="2025-09-09T00:21:24.597383518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6t5vd,Uid:a5a8b0e4-6528-40cd-9b53-98a4eaa14710,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:24.602325 kubelet[2418]: E0909 00:21:24.602286 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:24.603104 containerd[1432]: time="2025-09-09T00:21:24.602772342Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-b8v6l,Uid:c944489b-9c07-4916-812d-a7cb52003f3d,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:24.653252 containerd[1432]: time="2025-09-09T00:21:24.653172498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8v6l,Uid:c944489b-9c07-4916-812d-a7cb52003f3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80dee2c32f7a3a68e92a9f881a7617bdfd0a18cc81dd2605d2feb799047f12df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 9 00:21:24.653597 kubelet[2418]: E0909 00:21:24.653560 2418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dee2c32f7a3a68e92a9f881a7617bdfd0a18cc81dd2605d2feb799047f12df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 9 00:21:24.653653 kubelet[2418]: E0909 00:21:24.653639 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dee2c32f7a3a68e92a9f881a7617bdfd0a18cc81dd2605d2feb799047f12df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b8v6l" Sep 9 00:21:24.653677 kubelet[2418]: E0909 00:21:24.653659 2418 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dee2c32f7a3a68e92a9f881a7617bdfd0a18cc81dd2605d2feb799047f12df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b8v6l" Sep 9 00:21:24.653739 kubelet[2418]: E0909 00:21:24.653711 2418 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b8v6l_kube-system(c944489b-9c07-4916-812d-a7cb52003f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b8v6l_kube-system(c944489b-9c07-4916-812d-a7cb52003f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80dee2c32f7a3a68e92a9f881a7617bdfd0a18cc81dd2605d2feb799047f12df\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-b8v6l" podUID="c944489b-9c07-4916-812d-a7cb52003f3d" Sep 9 00:21:24.653891 containerd[1432]: time="2025-09-09T00:21:24.653841211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6t5vd,Uid:a5a8b0e4-6528-40cd-9b53-98a4eaa14710,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22e6d37ae3f755486739215c2717b241bfc54f865b5427ac7ac752ef5e86b3f6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 9 00:21:24.654279 kubelet[2418]: E0909 00:21:24.654249 2418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e6d37ae3f755486739215c2717b241bfc54f865b5427ac7ac752ef5e86b3f6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 9 00:21:24.654361 kubelet[2418]: E0909 00:21:24.654287 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e6d37ae3f755486739215c2717b241bfc54f865b5427ac7ac752ef5e86b3f6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-6t5vd" Sep 9 00:21:24.654361 kubelet[2418]: E0909 
00:21:24.654310 2418 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e6d37ae3f755486739215c2717b241bfc54f865b5427ac7ac752ef5e86b3f6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-6t5vd" Sep 9 00:21:24.654361 kubelet[2418]: E0909 00:21:24.654344 2418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6t5vd_kube-system(a5a8b0e4-6528-40cd-9b53-98a4eaa14710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6t5vd_kube-system(a5a8b0e4-6528-40cd-9b53-98a4eaa14710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22e6d37ae3f755486739215c2717b241bfc54f865b5427ac7ac752ef5e86b3f6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-6t5vd" podUID="a5a8b0e4-6528-40cd-9b53-98a4eaa14710" Sep 9 00:21:25.038089 kubelet[2418]: E0909 00:21:25.038055 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:25.038412 kubelet[2418]: E0909 00:21:25.038386 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:25.042206 containerd[1432]: time="2025-09-09T00:21:25.042161633Z" level=info msg="CreateContainer within sandbox \"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Sep 9 00:21:25.054968 containerd[1432]: time="2025-09-09T00:21:25.054824507Z" level=info msg="CreateContainer within sandbox 
\"dda816d915c9562dfdb5fe5e57c0b4e67f109546faf258bd672d523d26e93581\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"047854a48540f79a4605bb0216180b068847029d60fba65170ff1030cda7510a\"" Sep 9 00:21:25.055384 containerd[1432]: time="2025-09-09T00:21:25.055333262Z" level=info msg="StartContainer for \"047854a48540f79a4605bb0216180b068847029d60fba65170ff1030cda7510a\"" Sep 9 00:21:25.089494 systemd[1]: Started cri-containerd-047854a48540f79a4605bb0216180b068847029d60fba65170ff1030cda7510a.scope - libcontainer container 047854a48540f79a4605bb0216180b068847029d60fba65170ff1030cda7510a. Sep 9 00:21:25.119272 containerd[1432]: time="2025-09-09T00:21:25.119224711Z" level=info msg="StartContainer for \"047854a48540f79a4605bb0216180b068847029d60fba65170ff1030cda7510a\" returns successfully" Sep 9 00:21:26.041864 kubelet[2418]: E0909 00:21:26.041816 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:26.081238 kubelet[2418]: I0909 00:21:26.081073 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2x6qx" podStartSLOduration=3.329495672 podStartE2EDuration="8.081055483s" podCreationTimestamp="2025-09-09 00:21:18 +0000 UTC" firstStartedPulling="2025-09-09 00:21:19.364663752 +0000 UTC m=+6.462408201" lastFinishedPulling="2025-09-09 00:21:24.116223563 +0000 UTC m=+11.213968012" observedRunningTime="2025-09-09 00:21:26.080629847 +0000 UTC m=+13.178374296" watchObservedRunningTime="2025-09-09 00:21:26.081055483 +0000 UTC m=+13.178799932" Sep 9 00:21:26.124186 update_engine[1422]: I20250909 00:21:26.123579 1422 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:21:26.152165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3003) Sep 9 00:21:26.195176 systemd-networkd[1371]: flannel.1: Link UP Sep 9 00:21:26.195186 systemd-networkd[1371]: flannel.1: Gained carrier Sep 9 00:21:26.198580 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3002) Sep 9 00:21:26.253175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3001) Sep 9 00:21:27.045345 kubelet[2418]: E0909 00:21:27.045308 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:27.745396 systemd-networkd[1371]: flannel.1: Gained IPv6LL Sep 9 00:21:36.003574 kubelet[2418]: E0909 00:21:36.003534 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:36.004171 containerd[1432]: time="2025-09-09T00:21:36.003935746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8v6l,Uid:c944489b-9c07-4916-812d-a7cb52003f3d,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:36.026637 systemd-networkd[1371]: cni0: Link UP Sep 9 00:21:36.026643 systemd-networkd[1371]: cni0: Gained carrier Sep 9 00:21:36.029956 systemd-networkd[1371]: cni0: Lost carrier Sep 9 00:21:36.034695 systemd-networkd[1371]: veth8c2568b5: Link UP Sep 9 00:21:36.037346 kernel: cni0: port 1(veth8c2568b5) entered blocking state Sep 9 00:21:36.037417 kernel: cni0: port 1(veth8c2568b5) entered disabled state Sep 9 00:21:36.037437 kernel: veth8c2568b5: entered allmulticast mode Sep 9 00:21:36.039153 kernel: veth8c2568b5: entered promiscuous mode Sep 9 00:21:36.039238 kernel: cni0: port 1(veth8c2568b5) entered blocking state Sep 9 00:21:36.039258 kernel: cni0: port 1(veth8c2568b5) entered 
forwarding state Sep 9 00:21:36.040295 kernel: cni0: port 1(veth8c2568b5) entered disabled state Sep 9 00:21:36.049955 systemd-networkd[1371]: veth8c2568b5: Gained carrier Sep 9 00:21:36.050348 kernel: cni0: port 1(veth8c2568b5) entered blocking state Sep 9 00:21:36.050427 kernel: cni0: port 1(veth8c2568b5) entered forwarding state Sep 9 00:21:36.051060 systemd-networkd[1371]: cni0: Gained carrier Sep 9 00:21:36.054498 containerd[1432]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"} Sep 9 00:21:36.054498 containerd[1432]: delegateAdd: netconf sent to delegate plugin: Sep 9 00:21:36.081148 containerd[1432]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-09T00:21:36.080676051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:36.081148 containerd[1432]: time="2025-09-09T00:21:36.080730931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:36.081148 containerd[1432]: time="2025-09-09T00:21:36.080754171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:36.081148 containerd[1432]: time="2025-09-09T00:21:36.080839650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:36.102332 systemd[1]: Started cri-containerd-e68c0b68019ca12d360f334ead956fc684bab76ff3761b1239d0c25d6c3daf19.scope - libcontainer container e68c0b68019ca12d360f334ead956fc684bab76ff3761b1239d0c25d6c3daf19. Sep 9 00:21:36.115589 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:36.135917 containerd[1432]: time="2025-09-09T00:21:36.135787524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8v6l,Uid:c944489b-9c07-4916-812d-a7cb52003f3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e68c0b68019ca12d360f334ead956fc684bab76ff3761b1239d0c25d6c3daf19\"" Sep 9 00:21:36.142164 kubelet[2418]: E0909 00:21:36.142107 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:36.146464 containerd[1432]: time="2025-09-09T00:21:36.146421061Z" level=info msg="CreateContainer within sandbox \"e68c0b68019ca12d360f334ead956fc684bab76ff3761b1239d0c25d6c3daf19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:21:36.157387 containerd[1432]: time="2025-09-09T00:21:36.157253957Z" level=info msg="CreateContainer within sandbox \"e68c0b68019ca12d360f334ead956fc684bab76ff3761b1239d0c25d6c3daf19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c63deeae4026a6358d7e4f279ab9dc60d8711d56bec012ffba667812590b8ba\"" Sep 9 00:21:36.158534 containerd[1432]: time="2025-09-09T00:21:36.158505190Z" level=info msg="StartContainer for \"0c63deeae4026a6358d7e4f279ab9dc60d8711d56bec012ffba667812590b8ba\"" Sep 9 00:21:36.182328 systemd[1]: Started 
cri-containerd-0c63deeae4026a6358d7e4f279ab9dc60d8711d56bec012ffba667812590b8ba.scope - libcontainer container 0c63deeae4026a6358d7e4f279ab9dc60d8711d56bec012ffba667812590b8ba. Sep 9 00:21:36.203341 containerd[1432]: time="2025-09-09T00:21:36.203270204Z" level=info msg="StartContainer for \"0c63deeae4026a6358d7e4f279ab9dc60d8711d56bec012ffba667812590b8ba\" returns successfully" Sep 9 00:21:37.070894 kubelet[2418]: E0909 00:21:37.070761 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:37.089985 systemd-networkd[1371]: cni0: Gained IPv6LL Sep 9 00:21:37.091328 kubelet[2418]: I0909 00:21:37.090099 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b8v6l" podStartSLOduration=18.090083805 podStartE2EDuration="18.090083805s" podCreationTimestamp="2025-09-09 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:37.089729527 +0000 UTC m=+24.187473976" watchObservedRunningTime="2025-09-09 00:21:37.090083805 +0000 UTC m=+24.187828254" Sep 9 00:21:37.345430 systemd-networkd[1371]: veth8c2568b5: Gained IPv6LL Sep 9 00:21:38.067751 kubelet[2418]: E0909 00:21:38.067652 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:39.069846 kubelet[2418]: E0909 00:21:39.069766 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:40.000436 kubelet[2418]: E0909 00:21:40.000253 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:40.001710 containerd[1432]: time="2025-09-09T00:21:40.001640969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6t5vd,Uid:a5a8b0e4-6528-40cd-9b53-98a4eaa14710,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:40.128349 systemd-networkd[1371]: vethbf8ac218: Link UP Sep 9 00:21:40.130415 kernel: cni0: port 2(vethbf8ac218) entered blocking state Sep 9 00:21:40.130484 kernel: cni0: port 2(vethbf8ac218) entered disabled state Sep 9 00:21:40.131173 kernel: vethbf8ac218: entered allmulticast mode Sep 9 00:21:40.131228 kernel: vethbf8ac218: entered promiscuous mode Sep 9 00:21:40.149028 kernel: cni0: port 2(vethbf8ac218) entered blocking state Sep 9 00:21:40.149109 kernel: cni0: port 2(vethbf8ac218) entered forwarding state Sep 9 00:21:40.149295 systemd-networkd[1371]: vethbf8ac218: Gained carrier Sep 9 00:21:40.152242 containerd[1432]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Sep 9 00:21:40.152242 containerd[1432]: delegateAdd: netconf sent to delegate plugin: Sep 9 00:21:40.181113 containerd[1432]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-09T00:21:40.180828142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:40.181113 containerd[1432]: time="2025-09-09T00:21:40.181078180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:40.181341 containerd[1432]: time="2025-09-09T00:21:40.181298419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:40.181479 containerd[1432]: time="2025-09-09T00:21:40.181449578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:40.206302 systemd[1]: Started cri-containerd-aae734dc0c0254f2e07e753ed24311669974bcd925e9b74af91f376a347dbd89.scope - libcontainer container aae734dc0c0254f2e07e753ed24311669974bcd925e9b74af91f376a347dbd89. Sep 9 00:21:40.218309 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:40.236779 containerd[1432]: time="2025-09-09T00:21:40.236735139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6t5vd,Uid:a5a8b0e4-6528-40cd-9b53-98a4eaa14710,Namespace:kube-system,Attempt:0,} returns sandbox id \"aae734dc0c0254f2e07e753ed24311669974bcd925e9b74af91f376a347dbd89\"" Sep 9 00:21:40.237528 kubelet[2418]: E0909 00:21:40.237506 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:40.239680 containerd[1432]: time="2025-09-09T00:21:40.239644764Z" level=info msg="CreateContainer within sandbox \"aae734dc0c0254f2e07e753ed24311669974bcd925e9b74af91f376a347dbd89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:21:40.255496 containerd[1432]: time="2025-09-09T00:21:40.255385284Z" level=info msg="CreateContainer within sandbox 
\"aae734dc0c0254f2e07e753ed24311669974bcd925e9b74af91f376a347dbd89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65c5d9f810cf0fea82bd79181aa602cfb8e33e7e8651e2875b8311b61a639d25\"" Sep 9 00:21:40.257152 containerd[1432]: time="2025-09-09T00:21:40.257083316Z" level=info msg="StartContainer for \"65c5d9f810cf0fea82bd79181aa602cfb8e33e7e8651e2875b8311b61a639d25\"" Sep 9 00:21:40.278291 systemd[1]: Started cri-containerd-65c5d9f810cf0fea82bd79181aa602cfb8e33e7e8651e2875b8311b61a639d25.scope - libcontainer container 65c5d9f810cf0fea82bd79181aa602cfb8e33e7e8651e2875b8311b61a639d25. Sep 9 00:21:40.305873 containerd[1432]: time="2025-09-09T00:21:40.305757109Z" level=info msg="StartContainer for \"65c5d9f810cf0fea82bd79181aa602cfb8e33e7e8651e2875b8311b61a639d25\" returns successfully" Sep 9 00:21:41.076776 kubelet[2418]: E0909 00:21:41.076661 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:41.089557 kubelet[2418]: I0909 00:21:41.089361 2418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6t5vd" podStartSLOduration=22.089343759 podStartE2EDuration="22.089343759s" podCreationTimestamp="2025-09-09 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:41.088245925 +0000 UTC m=+28.185990374" watchObservedRunningTime="2025-09-09 00:21:41.089343759 +0000 UTC m=+28.187088208" Sep 9 00:21:41.377298 systemd-networkd[1371]: vethbf8ac218: Gained IPv6LL Sep 9 00:21:42.081146 kubelet[2418]: E0909 00:21:42.080990 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:43.083548 kubelet[2418]: E0909 00:21:43.083430 2418 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:44.767167 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:41902.service - OpenSSH per-connection server daemon (10.0.0.1:41902). Sep 9 00:21:44.822775 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 41902 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:44.824305 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:44.828185 systemd-logind[1418]: New session 6 of user core. Sep 9 00:21:44.840278 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:21:44.961340 sshd[3420]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:44.965806 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:41902.service: Deactivated successfully. Sep 9 00:21:44.968635 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:21:44.971302 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:21:44.972028 systemd-logind[1418]: Removed session 6. Sep 9 00:21:49.971544 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:33018.service - OpenSSH per-connection server daemon (10.0.0.1:33018). Sep 9 00:21:50.003827 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 33018 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:50.005216 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:50.009628 systemd-logind[1418]: New session 7 of user core. Sep 9 00:21:50.019298 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:21:50.134360 sshd[3460]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:50.138361 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:21:50.138640 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:33018.service: Deactivated successfully. 
Sep 9 00:21:50.140399 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:21:50.141057 systemd-logind[1418]: Removed session 7. Sep 9 00:21:55.148959 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026). Sep 9 00:21:55.188147 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:55.189488 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:55.193506 systemd-logind[1418]: New session 8 of user core. Sep 9 00:21:55.205333 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:21:55.322417 sshd[3497]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:55.334875 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:33026.service: Deactivated successfully. Sep 9 00:21:55.336642 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:21:55.338424 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:21:55.339306 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:33030.service - OpenSSH per-connection server daemon (10.0.0.1:33030). Sep 9 00:21:55.341877 systemd-logind[1418]: Removed session 8. Sep 9 00:21:55.379486 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 33030 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:55.380823 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:55.385196 systemd-logind[1418]: New session 9 of user core. Sep 9 00:21:55.395324 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:21:55.555637 sshd[3512]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:55.568692 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:33030.service: Deactivated successfully. Sep 9 00:21:55.572615 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 9 00:21:55.578344 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:21:55.589474 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:33038.service - OpenSSH per-connection server daemon (10.0.0.1:33038).
Sep 9 00:21:55.592254 systemd-logind[1418]: Removed session 9.
Sep 9 00:21:55.629652 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 33038 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:21:55.630895 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:55.634434 systemd-logind[1418]: New session 10 of user core.
Sep 9 00:21:55.641328 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 00:21:55.754609 sshd[3528]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:55.757659 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:33038.service: Deactivated successfully.
Sep 9 00:21:55.759450 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:21:55.760070 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:21:55.760864 systemd-logind[1418]: Removed session 10.
Sep 9 00:22:00.766958 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:56956.service - OpenSSH per-connection server daemon (10.0.0.1:56956).
Sep 9 00:22:00.803871 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 56956 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:00.808398 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:00.814409 systemd-logind[1418]: New session 11 of user core.
Sep 9 00:22:00.825422 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 00:22:00.968217 sshd[3563]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:00.979177 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:56956.service: Deactivated successfully.
Sep 9 00:22:00.980620 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:22:00.981829 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:22:00.983056 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:56970.service - OpenSSH per-connection server daemon (10.0.0.1:56970).
Sep 9 00:22:00.983715 systemd-logind[1418]: Removed session 11.
Sep 9 00:22:01.037983 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 56970 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:01.039829 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:01.045430 systemd-logind[1418]: New session 12 of user core.
Sep 9 00:22:01.053478 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 00:22:01.300809 sshd[3577]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:01.311808 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:56970.service: Deactivated successfully.
Sep 9 00:22:01.316530 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:22:01.318425 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:22:01.329503 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976).
Sep 9 00:22:01.331142 systemd-logind[1418]: Removed session 12.
Sep 9 00:22:01.365254 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:01.366836 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:01.373716 systemd-logind[1418]: New session 13 of user core.
Sep 9 00:22:01.384351 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 00:22:02.115676 sshd[3589]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:02.124645 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:56976.service: Deactivated successfully.
Sep 9 00:22:02.126731 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:22:02.128608 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:22:02.138526 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:56990.service - OpenSSH per-connection server daemon (10.0.0.1:56990).
Sep 9 00:22:02.139581 systemd-logind[1418]: Removed session 13.
Sep 9 00:22:02.173295 sshd[3629]: Accepted publickey for core from 10.0.0.1 port 56990 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:02.174623 sshd[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:02.179063 systemd-logind[1418]: New session 14 of user core.
Sep 9 00:22:02.185302 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:22:02.412159 sshd[3629]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:02.423704 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:56990.service: Deactivated successfully.
Sep 9 00:22:02.426459 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:22:02.428355 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:22:02.430372 systemd-logind[1418]: Removed session 14.
Sep 9 00:22:02.442763 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:57002.service - OpenSSH per-connection server daemon (10.0.0.1:57002).
Sep 9 00:22:02.482267 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 57002 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:02.484171 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:02.489398 systemd-logind[1418]: New session 15 of user core.
Sep 9 00:22:02.498320 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:22:02.627292 sshd[3641]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:02.630498 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:57002.service: Deactivated successfully.
Sep 9 00:22:02.633866 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:22:02.634618 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:22:02.635775 systemd-logind[1418]: Removed session 15.
Sep 9 00:22:07.653470 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:57010.service - OpenSSH per-connection server daemon (10.0.0.1:57010).
Sep 9 00:22:07.684869 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:07.686743 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:07.692264 systemd-logind[1418]: New session 16 of user core.
Sep 9 00:22:07.702622 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:22:07.822003 sshd[3678]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:07.825638 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:57010.service: Deactivated successfully.
Sep 9 00:22:07.827377 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:22:07.827919 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:22:07.828660 systemd-logind[1418]: Removed session 16.
Sep 9 00:22:12.833346 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:52814.service - OpenSSH per-connection server daemon (10.0.0.1:52814).
Sep 9 00:22:12.874613 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 52814 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:12.876422 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:12.880641 systemd-logind[1418]: New session 17 of user core.
Sep 9 00:22:12.893340 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:22:13.022626 sshd[3713]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:13.025957 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:52814.service: Deactivated successfully.
Sep 9 00:22:13.027716 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:22:13.029451 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:22:13.030814 systemd-logind[1418]: Removed session 17.
Sep 9 00:22:18.031641 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:52828.service - OpenSSH per-connection server daemon (10.0.0.1:52828).
Sep 9 00:22:18.083617 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 52828 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:22:18.085655 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:22:18.091260 systemd-logind[1418]: New session 18 of user core.
Sep 9 00:22:18.102304 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:22:18.236316 sshd[3750]: pam_unix(sshd:session): session closed for user core
Sep 9 00:22:18.246700 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:52828.service: Deactivated successfully.
Sep 9 00:22:18.251122 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:22:18.254360 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:22:18.255275 systemd-logind[1418]: Removed session 18.