Jul 7 05:58:33.891298 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 05:58:33.891318 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 05:58:33.891328 kernel: KASLR enabled Jul 7 05:58:33.891334 kernel: efi: EFI v2.7 by EDK II Jul 7 05:58:33.891340 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 7 05:58:33.891345 kernel: random: crng init done Jul 7 05:58:33.891352 kernel: ACPI: Early table checksum verification disabled Jul 7 05:58:33.891358 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 7 05:58:33.891364 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 7 05:58:33.891373 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891379 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891385 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891391 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891397 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891405 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891412 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891419 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891425 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 05:58:33.891432 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 7 05:58:33.891438 kernel: NUMA: Failed to initialise from firmware Jul 7 05:58:33.891444 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 05:58:33.891450 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jul 7 05:58:33.891456 kernel: Zone ranges: Jul 7 05:58:33.891463 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 05:58:33.891469 kernel: DMA32 empty Jul 7 05:58:33.891476 kernel: Normal empty Jul 7 05:58:33.891482 kernel: Movable zone start for each node Jul 7 05:58:33.891488 kernel: Early memory node ranges Jul 7 05:58:33.891494 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 7 05:58:33.891501 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 7 05:58:33.891507 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 7 05:58:33.891513 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 7 05:58:33.891519 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 7 05:58:33.891525 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 7 05:58:33.891531 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 7 05:58:33.891538 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 7 05:58:33.891544 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 7 05:58:33.891551 kernel: psci: probing for conduit method from ACPI. Jul 7 05:58:33.891558 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 7 05:58:33.891564 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 05:58:33.891573 kernel: psci: Trusted OS migration not required Jul 7 05:58:33.891580 kernel: psci: SMC Calling Convention v1.1 Jul 7 05:58:33.891586 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 7 05:58:33.891594 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 05:58:33.891601 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 05:58:33.891608 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 7 05:58:33.891614 kernel: Detected PIPT I-cache on CPU0 Jul 7 05:58:33.891621 kernel: CPU features: detected: GIC system register CPU interface Jul 7 05:58:33.891627 kernel: CPU features: detected: Hardware dirty bit management Jul 7 05:58:33.891634 kernel: CPU features: detected: Spectre-v4 Jul 7 05:58:33.891641 kernel: CPU features: detected: Spectre-BHB Jul 7 05:58:33.891648 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 05:58:33.891655 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 05:58:33.891662 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 05:58:33.891669 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 05:58:33.891676 kernel: alternatives: applying boot alternatives Jul 7 05:58:33.891683 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:58:33.891690 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 05:58:33.891697 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 05:58:33.891704 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 05:58:33.891711 kernel: Fallback order for Node 0: 0 Jul 7 05:58:33.891717 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 7 05:58:33.891724 kernel: Policy zone: DMA Jul 7 05:58:33.891738 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 05:58:33.891747 kernel: software IO TLB: area num 4. Jul 7 05:58:33.891754 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 7 05:58:33.891761 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved) Jul 7 05:58:33.891768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 7 05:58:33.891775 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 05:58:33.891782 kernel: rcu: RCU event tracing is enabled. Jul 7 05:58:33.891789 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 7 05:58:33.891796 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 05:58:33.891802 kernel: Tracing variant of Tasks RCU enabled. Jul 7 05:58:33.891809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 05:58:33.891816 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 7 05:58:33.891823 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 05:58:33.891830 kernel: GICv3: 256 SPIs implemented Jul 7 05:58:33.891837 kernel: GICv3: 0 Extended SPIs implemented Jul 7 05:58:33.891844 kernel: Root IRQ handler: gic_handle_irq Jul 7 05:58:33.891850 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 05:58:33.891857 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 7 05:58:33.891864 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 7 05:58:33.891870 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 05:58:33.891877 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 7 05:58:33.891884 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 7 05:58:33.891891 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 7 05:58:33.891913 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 05:58:33.891922 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:58:33.891928 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 05:58:33.891935 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 05:58:33.891942 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 05:58:33.891949 kernel: arm-pv: using stolen time PV Jul 7 05:58:33.891956 kernel: Console: colour dummy device 80x25 Jul 7 05:58:33.891963 kernel: ACPI: Core revision 20230628 Jul 7 05:58:33.891970 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 05:58:33.891977 kernel: pid_max: default: 32768 minimum: 301 Jul 7 05:58:33.891984 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 05:58:33.891992 kernel: landlock: Up and running. Jul 7 05:58:33.891998 kernel: SELinux: Initializing. Jul 7 05:58:33.892005 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:58:33.892012 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:58:33.892019 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 05:58:33.892026 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 05:58:33.892033 kernel: rcu: Hierarchical SRCU implementation. Jul 7 05:58:33.892040 kernel: rcu: Max phase no-delay instances is 400. Jul 7 05:58:33.892047 kernel: Platform MSI: ITS@0x8080000 domain created Jul 7 05:58:33.892055 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 7 05:58:33.892061 kernel: Remapping and enabling EFI services. Jul 7 05:58:33.892068 kernel: smp: Bringing up secondary CPUs ... 
Jul 7 05:58:33.892075 kernel: Detected PIPT I-cache on CPU1 Jul 7 05:58:33.892082 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 7 05:58:33.892089 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 7 05:58:33.892096 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:58:33.892103 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 05:58:33.892110 kernel: Detected PIPT I-cache on CPU2 Jul 7 05:58:33.892117 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 7 05:58:33.892125 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 7 05:58:33.892132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:58:33.892144 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 7 05:58:33.892152 kernel: Detected PIPT I-cache on CPU3 Jul 7 05:58:33.892159 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 7 05:58:33.892166 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 7 05:58:33.892174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:58:33.892181 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 7 05:58:33.892188 kernel: smp: Brought up 1 node, 4 CPUs Jul 7 05:58:33.892196 kernel: SMP: Total of 4 processors activated. Jul 7 05:58:33.892204 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 05:58:33.892211 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 05:58:33.892218 kernel: CPU features: detected: Common not Private translations Jul 7 05:58:33.892226 kernel: CPU features: detected: CRC32 instructions Jul 7 05:58:33.892233 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 7 05:58:33.892240 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 05:58:33.892248 kernel: CPU features: detected: LSE atomic instructions Jul 7 05:58:33.892256 kernel: CPU features: detected: Privileged Access Never Jul 7 05:58:33.892263 kernel: CPU features: detected: RAS Extension Support Jul 7 05:58:33.892271 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 7 05:58:33.892278 kernel: CPU: All CPU(s) started at EL1 Jul 7 05:58:33.892285 kernel: alternatives: applying system-wide alternatives Jul 7 05:58:33.892292 kernel: devtmpfs: initialized Jul 7 05:58:33.892299 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 05:58:33.892307 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 7 05:58:33.892314 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 05:58:33.892323 kernel: SMBIOS 3.0.0 present. 
Jul 7 05:58:33.892330 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 7 05:58:33.892337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 05:58:33.892345 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 05:58:33.892352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 05:58:33.892359 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 05:58:33.892367 kernel: audit: initializing netlink subsys (disabled) Jul 7 05:58:33.892374 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jul 7 05:58:33.892381 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 05:58:33.892390 kernel: cpuidle: using governor menu Jul 7 05:58:33.892397 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 7 05:58:33.892404 kernel: ASID allocator initialised with 32768 entries Jul 7 05:58:33.892411 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 05:58:33.892418 kernel: Serial: AMBA PL011 UART driver Jul 7 05:58:33.892426 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 05:58:33.892433 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 05:58:33.892440 kernel: Modules: 509008 pages in range for PLT usage Jul 7 05:58:33.892447 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 05:58:33.892455 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 05:58:33.892463 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 05:58:33.892470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 05:58:33.892477 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 05:58:33.892485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 05:58:33.892492 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 05:58:33.892499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 05:58:33.892506 kernel: ACPI: Added _OSI(Module Device) Jul 7 05:58:33.892513 kernel: ACPI: Added _OSI(Processor Device) Jul 7 05:58:33.892522 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 05:58:33.892529 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 05:58:33.892536 kernel: ACPI: Interpreter enabled Jul 7 05:58:33.892543 kernel: ACPI: Using GIC for interrupt routing Jul 7 05:58:33.892550 kernel: ACPI: MCFG table detected, 1 entries Jul 7 05:58:33.892558 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 7 05:58:33.892565 kernel: printk: console [ttyAMA0] enabled Jul 7 05:58:33.892572 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 05:58:33.892700 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 05:58:33.892788 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 05:58:33.892855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 05:58:33.893003 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 7 05:58:33.893069 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 7 05:58:33.893079 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 7 05:58:33.893086 kernel: PCI host bridge to bus 0000:00 Jul 7 05:58:33.893153 kernel: pci_bus 0000:00: root bus 
resource [mem 0x10000000-0x3efeffff window] Jul 7 05:58:33.893214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 7 05:58:33.893270 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 7 05:58:33.893325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 05:58:33.893403 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 7 05:58:33.893477 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 7 05:58:33.893543 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 7 05:58:33.893610 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 7 05:58:33.893677 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 7 05:58:33.893750 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 7 05:58:33.893818 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 7 05:58:33.893883 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 7 05:58:33.893960 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 7 05:58:33.894018 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 7 05:58:33.894079 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 7 05:58:33.894089 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 7 05:58:33.894097 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 7 05:58:33.894104 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 7 05:58:33.894111 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 7 05:58:33.894118 kernel: iommu: Default domain type: Translated Jul 7 05:58:33.894126 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 05:58:33.894133 kernel: efivars: Registered efivars operations Jul 7 05:58:33.894140 kernel: vgaarb: loaded Jul 7 05:58:33.894150 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 05:58:33.894157 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 05:58:33.894164 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 05:58:33.894171 kernel: pnp: PnP ACPI init Jul 7 05:58:33.894247 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 7 05:58:33.894258 kernel: pnp: PnP ACPI: found 1 devices Jul 7 05:58:33.894265 kernel: NET: Registered PF_INET protocol family Jul 7 05:58:33.894272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 05:58:33.894282 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 05:58:33.894290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 05:58:33.894297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 05:58:33.894304 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 05:58:33.894312 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 05:58:33.894319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:58:33.894326 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:58:33.894334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 05:58:33.894341 kernel: PCI: CLS 0 bytes, default 64 Jul 7 05:58:33.894349 kernel: kvm [1]: HYP mode not available Jul 7 05:58:33.894357 kernel: Initialise system trusted keyrings Jul 7 05:58:33.894364 kernel: workingset: timestamp_bits=39 max_order=20 
bucket_order=0 Jul 7 05:58:33.894371 kernel: Key type asymmetric registered Jul 7 05:58:33.894378 kernel: Asymmetric key parser 'x509' registered Jul 7 05:58:33.894386 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 05:58:33.894393 kernel: io scheduler mq-deadline registered Jul 7 05:58:33.894400 kernel: io scheduler kyber registered Jul 7 05:58:33.894407 kernel: io scheduler bfq registered Jul 7 05:58:33.894416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 05:58:33.894423 kernel: ACPI: button: Power Button [PWRB] Jul 7 05:58:33.894430 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 7 05:58:33.894497 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 7 05:58:33.894507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 05:58:33.894514 kernel: thunder_xcv, ver 1.0 Jul 7 05:58:33.894521 kernel: thunder_bgx, ver 1.0 Jul 7 05:58:33.894529 kernel: nicpf, ver 1.0 Jul 7 05:58:33.894536 kernel: nicvf, ver 1.0 Jul 7 05:58:33.894608 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 05:58:33.894669 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:58:33 UTC (1751867913) Jul 7 05:58:33.894679 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 05:58:33.894687 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 7 05:58:33.894694 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 05:58:33.894702 kernel: watchdog: Hard watchdog permanently disabled Jul 7 05:58:33.894709 kernel: NET: Registered PF_INET6 protocol family Jul 7 05:58:33.894716 kernel: Segment Routing with IPv6 Jul 7 05:58:33.894725 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 05:58:33.894741 kernel: NET: Registered PF_PACKET protocol family Jul 7 05:58:33.894748 kernel: Key type dns_resolver registered Jul 7 05:58:33.894756 kernel: registered taskstats version 1 Jul 7 05:58:33.894763 kernel: Loading compiled-in X.509 certificates Jul 7 05:58:33.894770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 05:58:33.894777 kernel: Key type .fscrypt registered Jul 7 05:58:33.894785 kernel: Key type fscrypt-provisioning registered Jul 7 05:58:33.894792 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 05:58:33.894801 kernel: ima: Allocated hash algorithm: sha1 Jul 7 05:58:33.894809 kernel: ima: No architecture policies found Jul 7 05:58:33.894816 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 05:58:33.894824 kernel: clk: Disabling unused clocks Jul 7 05:58:33.894831 kernel: Freeing unused kernel memory: 39424K Jul 7 05:58:33.894838 kernel: Run /init as init process Jul 7 05:58:33.894845 kernel: with arguments: Jul 7 05:58:33.894852 kernel: /init Jul 7 05:58:33.894859 kernel: with environment: Jul 7 05:58:33.894867 kernel: HOME=/ Jul 7 05:58:33.894874 kernel: TERM=linux Jul 7 05:58:33.894882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 05:58:33.894891 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:58:33.894916 systemd[1]: Detected virtualization kvm. Jul 7 05:58:33.894925 systemd[1]: Detected architecture arm64. 
Jul 7 05:58:33.894932 systemd[1]: Running in initrd. Jul 7 05:58:33.894942 systemd[1]: No hostname configured, using default hostname. Jul 7 05:58:33.894950 systemd[1]: Hostname set to . Jul 7 05:58:33.894958 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:58:33.894965 systemd[1]: Queued start job for default target initrd.target. Jul 7 05:58:33.894973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:58:33.894981 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:58:33.894989 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 05:58:33.894997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:58:33.895007 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 05:58:33.895015 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 05:58:33.895024 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 05:58:33.895032 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 05:58:33.895040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:58:33.895048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:58:33.895056 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:58:33.895064 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:58:33.895072 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:58:33.895080 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:58:33.895088 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:58:33.895096 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:58:33.895104 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:58:33.895111 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:58:33.895119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:58:33.895127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:58:33.895136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:58:33.895144 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:58:33.895152 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 05:58:33.895160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:58:33.895168 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 05:58:33.895176 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 05:58:33.895183 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:58:33.895191 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:58:33.895200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:58:33.895208 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 05:58:33.895216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:58:33.895224 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 7 05:58:33.895232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:58:33.895241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:58:33.895250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:58:33.895274 systemd-journald[237]: Collecting audit messages is disabled. Jul 7 05:58:33.895293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:58:33.895303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:58:33.895311 systemd-journald[237]: Journal started Jul 7 05:58:33.895330 systemd-journald[237]: Runtime Journal (/run/log/journal/5250e5b0dd364c38abca7cd3fa4abbe6) is 5.9M, max 47.3M, 41.4M free. Jul 7 05:58:33.888032 systemd-modules-load[238]: Inserted module 'overlay' Jul 7 05:58:33.897338 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:58:33.900949 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 05:58:33.902140 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 7 05:58:33.902984 kernel: Bridge firewalling registered Jul 7 05:58:33.907060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:58:33.908164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:58:33.911051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:58:33.914047 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 05:58:33.916050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:58:33.917140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:58:33.920235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:58:33.925242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:58:33.929914 dracut-cmdline[265]: dracut-dracut-053 Jul 7 05:58:33.933581 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:58:33.933071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:58:33.961459 systemd-resolved[282]: Positive Trust Anchors: Jul 7 05:58:33.961472 systemd-resolved[282]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:58:33.961504 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:58:33.967181 systemd-resolved[282]: Defaulting to hostname 'linux'. Jul 7 05:58:33.969308 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:58:33.972140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:58:33.996917 kernel: SCSI subsystem initialized Jul 7 05:58:34.000920 kernel: Loading iSCSI transport class v2.0-870. Jul 7 05:58:34.008920 kernel: iscsi: registered transport (tcp) Jul 7 05:58:34.021197 kernel: iscsi: registered transport (qla4xxx) Jul 7 05:58:34.021216 kernel: QLogic iSCSI HBA Driver Jul 7 05:58:34.061535 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 05:58:34.075068 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 05:58:34.089939 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 05:58:34.089981 kernel: device-mapper: uevent: version 1.0.3 Jul 7 05:58:34.090925 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 05:58:34.137930 kernel: raid6: neonx8 gen() 15728 MB/s Jul 7 05:58:34.154935 kernel: raid6: neonx4 gen() 15603 MB/s Jul 7 05:58:34.171930 kernel: raid6: neonx2 gen() 13198 MB/s Jul 7 05:58:34.188926 kernel: raid6: neonx1 gen() 10458 MB/s Jul 7 05:58:34.205926 kernel: raid6: int64x8 gen() 6949 MB/s Jul 7 05:58:34.222927 kernel: raid6: int64x4 gen() 7306 MB/s Jul 7 05:58:34.239927 kernel: raid6: int64x2 gen() 6111 MB/s Jul 7 05:58:34.256948 kernel: raid6: int64x1 gen() 5034 MB/s Jul 7 05:58:34.256966 kernel: raid6: using algorithm neonx8 gen() 15728 MB/s Jul 7 05:58:34.274934 kernel: raid6: .... xor() 11906 MB/s, rmw enabled Jul 7 05:58:34.274965 kernel: raid6: using neon recovery algorithm Jul 7 05:58:34.282060 kernel: xor: measuring software checksum speed Jul 7 05:58:34.282089 kernel: 8regs : 19773 MB/sec Jul 7 05:58:34.283287 kernel: 32regs : 19650 MB/sec Jul 7 05:58:34.283300 kernel: arm64_neon : 25568 MB/sec Jul 7 05:58:34.283309 kernel: xor: using function: arm64_neon (25568 MB/sec) Jul 7 05:58:34.334924 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 05:58:34.345734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:58:34.354040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:58:34.365167 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jul 7 05:58:34.368243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:58:34.375071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 05:58:34.386159 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jul 7 05:58:34.411263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 7 05:58:34.423040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:58:34.460934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:58:34.468087 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 05:58:34.482222 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 05:58:34.483567 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:58:34.486056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:58:34.487935 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:58:34.497141 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 05:58:34.504423 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 05:58:34.514206 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 7 05:58:34.514377 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 05:58:34.517618 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 05:58:34.517648 kernel: GPT:9289727 != 19775487 Jul 7 05:58:34.517659 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 05:58:34.517668 kernel: GPT:9289727 != 19775487 Jul 7 05:58:34.517686 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 05:58:34.517703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 05:58:34.517479 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:58:34.517592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:58:34.521612 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:58:34.522541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:58:34.522677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:58:34.524512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:58:34.535663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:58:34.536678 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (516) Jul 7 05:58:34.539439 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (521) Jul 7 05:58:34.549620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:58:34.554724 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 05:58:34.562583 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 05:58:34.567438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 05:58:34.571428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 05:58:34.572482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 05:58:34.586027 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 05:58:34.587563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:58:34.593550 disk-uuid[550]: Primary Header is updated. 
Jul 7 05:58:34.593550 disk-uuid[550]: Secondary Entries is updated. Jul 7 05:58:34.593550 disk-uuid[550]: Secondary Header is updated. Jul 7 05:58:34.603183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 05:58:34.606287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:58:35.613937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 05:58:35.614057 disk-uuid[552]: The operation has completed successfully. Jul 7 05:58:35.635835 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 05:58:35.635937 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 05:58:35.660049 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 05:58:35.662695 sh[571]: Success Jul 7 05:58:35.673936 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 05:58:35.700265 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 05:58:35.708218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 05:58:35.709660 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 05:58:35.719497 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 05:58:35.719531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:58:35.721913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 05:58:35.721949 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 05:58:35.721961 kernel: BTRFS info (device dm-0): using free space tree Jul 7 05:58:35.725266 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 05:58:35.726421 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 05:58:35.741034 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 05:58:35.742401 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 05:58:35.749512 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:58:35.749549 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:58:35.749560 kernel: BTRFS info (device vda6): using free space tree Jul 7 05:58:35.752108 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 05:58:35.759977 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 05:58:35.761530 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:58:35.766641 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 05:58:35.774041 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 05:58:35.833259 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:58:35.847783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:58:35.877012 systemd-networkd[763]: lo: Link UP Jul 7 05:58:35.877023 systemd-networkd[763]: lo: Gained carrier Jul 7 05:58:35.877684 systemd-networkd[763]: Enumeration completed Jul 7 05:58:35.877787 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 7 05:58:35.878121 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:58:35.878124 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:58:35.878972 systemd-networkd[763]: eth0: Link UP Jul 7 05:58:35.878976 systemd-networkd[763]: eth0: Gained carrier Jul 7 05:58:35.885186 ignition[666]: Ignition 2.19.0 Jul 7 05:58:35.878983 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:58:35.885192 ignition[666]: Stage: fetch-offline Jul 7 05:58:35.880712 systemd[1]: Reached target network.target - Network. Jul 7 05:58:35.885226 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:35.885234 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:35.885376 ignition[666]: parsed url from cmdline: "" Jul 7 05:58:35.885379 ignition[666]: no config URL provided Jul 7 05:58:35.885385 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 05:58:35.885392 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jul 7 05:58:35.885413 ignition[666]: op(1): [started] loading QEMU firmware config module Jul 7 05:58:35.885418 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 05:58:35.894767 ignition[666]: op(1): [finished] loading QEMU firmware config module Jul 7 05:58:35.901934 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 05:58:35.915586 ignition[666]: parsing config with SHA512: ec80fef648457a48eb19038222812066019ce795aac1d82a5c49c63e305fdbaf18371325b90c43e2dd6bb33aa7c742b612271f603e74f6e033799f945a95cef6 Jul 7 05:58:35.920641 unknown[666]: fetched base config from "system" Jul 7 05:58:35.920655 unknown[666]: fetched user config from "qemu" Jul 7 05:58:35.921100 ignition[666]: fetch-offline: fetch-offline passed Jul 7 05:58:35.921170 ignition[666]: Ignition finished successfully Jul 7 05:58:35.922286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 05:58:35.923967 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 05:58:35.931134 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 05:58:35.940659 ignition[770]: Ignition 2.19.0 Jul 7 05:58:35.940669 ignition[770]: Stage: kargs Jul 7 05:58:35.940827 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:35.940837 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:35.941674 ignition[770]: kargs: kargs passed Jul 7 05:58:35.941713 ignition[770]: Ignition finished successfully Jul 7 05:58:35.945842 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 05:58:35.948435 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 05:58:35.960389 ignition[778]: Ignition 2.19.0 Jul 7 05:58:35.960397 ignition[778]: Stage: disks Jul 7 05:58:35.960547 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:35.960555 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:35.961374 ignition[778]: disks: disks passed Jul 7 05:58:35.963205 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 7 05:58:35.961414 ignition[778]: Ignition finished successfully Jul 7 05:58:35.964654 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 05:58:35.966046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 05:58:35.967531 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:58:35.969083 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:58:35.970739 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:58:35.984065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 05:58:35.994402 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 7 05:58:35.998239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 05:58:36.006969 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 05:58:36.047930 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 05:58:36.048328 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 05:58:36.049434 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 05:58:36.066992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:58:36.068484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 05:58:36.069860 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 05:58:36.073914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797) Jul 7 05:58:36.069909 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 05:58:36.069931 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:58:36.079452 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:58:36.079479 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:58:36.079489 kernel: BTRFS info (device vda6): using free space tree Jul 7 05:58:36.079499 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 05:58:36.076696 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 05:58:36.097067 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 05:58:36.098758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 05:58:36.133997 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 05:58:36.137145 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Jul 7 05:58:36.140647 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 05:58:36.143384 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 05:58:36.208244 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 05:58:36.220991 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 05:58:36.223371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 05:58:36.227932 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:58:36.245397 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 7 05:58:36.249281 ignition[912]: INFO : Ignition 2.19.0 Jul 7 05:58:36.249281 ignition[912]: INFO : Stage: mount Jul 7 05:58:36.250660 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:36.250660 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:36.250660 ignition[912]: INFO : mount: mount passed Jul 7 05:58:36.250660 ignition[912]: INFO : Ignition finished successfully Jul 7 05:58:36.253562 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 05:58:36.264989 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 05:58:36.718567 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 05:58:36.733124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:58:36.737919 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925) Jul 7 05:58:36.740098 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:58:36.740127 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:58:36.740138 kernel: BTRFS info (device vda6): using free space tree Jul 7 05:58:36.742908 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 05:58:36.744221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 05:58:36.764148 ignition[942]: INFO : Ignition 2.19.0 Jul 7 05:58:36.764148 ignition[942]: INFO : Stage: files Jul 7 05:58:36.765579 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:36.765579 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:36.765579 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Jul 7 05:58:36.768616 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 05:58:36.768616 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 05:58:36.768616 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 05:58:36.768616 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 05:58:36.768616 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 05:58:36.767883 unknown[942]: wrote ssh authorized keys file for user: core Jul 7 05:58:36.775146 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 7 05:58:36.775146 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 7 05:58:36.831417 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 05:58:36.993181 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 7 05:58:36.993181 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 05:58:36.996426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 7 05:58:37.320040 systemd-networkd[763]: eth0: Gained IPv6LL Jul 7 05:58:37.572390 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 05:58:37.887930 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 7 05:58:37.889995 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 05:58:37.913330 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 05:58:37.916557 ignition[942]: INFO 
: files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 05:58:37.917849 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 05:58:37.917849 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 05:58:37.917849 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 05:58:37.917849 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:58:37.917849 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:58:37.917849 ignition[942]: INFO : files: files passed Jul 7 05:58:37.917849 ignition[942]: INFO : Ignition finished successfully Jul 7 05:58:37.918741 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 05:58:37.927054 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 05:58:37.928643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 05:58:37.930953 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 05:58:37.931030 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 05:58:37.935722 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 05:58:37.939001 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:58:37.939001 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:58:37.941594 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:58:37.940977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:58:37.942968 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 05:58:37.962072 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 05:58:37.979850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 05:58:37.979966 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 05:58:37.981873 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 05:58:37.983488 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 05:58:37.985034 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 05:58:37.985770 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 05:58:38.000594 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 05:58:38.009066 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 05:58:38.016798 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:58:38.017995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:58:38.019841 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 05:58:38.021353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 05:58:38.021475 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 7 05:58:38.023653 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 05:58:38.025403 systemd[1]: Stopped target basic.target - Basic System. Jul 7 05:58:38.026801 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 05:58:38.028293 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:58:38.029961 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 05:58:38.031772 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 05:58:38.033484 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:58:38.035142 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 05:58:38.036862 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 05:58:38.038405 systemd[1]: Stopped target swap.target - Swaps. Jul 7 05:58:38.039759 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 05:58:38.039886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 05:58:38.041950 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:58:38.043738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:58:38.045497 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 05:58:38.049953 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:58:38.051057 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 05:58:38.051175 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 05:58:38.053537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 05:58:38.053656 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 05:58:38.055532 systemd[1]: Stopped target paths.target - Path Units. Jul 7 05:58:38.056858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 05:58:38.058049 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:58:38.059550 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 05:58:38.061210 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 05:58:38.063144 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 05:58:38.063285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:58:38.064632 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 05:58:38.064773 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:58:38.066114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 05:58:38.066297 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:58:38.067883 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 05:58:38.068054 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 05:58:38.079062 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 05:58:38.081081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 05:58:38.082016 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 05:58:38.082150 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:58:38.083927 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 7 05:58:38.084035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 05:58:38.089147 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 05:58:38.089240 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 05:58:38.093969 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 05:58:38.096328 ignition[997]: INFO : Ignition 2.19.0 Jul 7 05:58:38.096328 ignition[997]: INFO : Stage: umount Jul 7 05:58:38.096328 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:58:38.096328 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 05:58:38.096328 ignition[997]: INFO : umount: umount passed Jul 7 05:58:38.096328 ignition[997]: INFO : Ignition finished successfully Jul 7 05:58:38.096476 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 05:58:38.096587 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 05:58:38.098471 systemd[1]: Stopped target network.target - Network. Jul 7 05:58:38.099598 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 05:58:38.099658 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 05:58:38.101138 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 05:58:38.101178 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 05:58:38.102724 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 05:58:38.102767 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 05:58:38.104148 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 05:58:38.104189 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 05:58:38.105854 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 05:58:38.107319 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 05:58:38.113804 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 05:58:38.113950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 05:58:38.113958 systemd-networkd[763]: eth0: DHCPv6 lease lost Jul 7 05:58:38.115821 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 05:58:38.115952 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 05:58:38.118326 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 05:58:38.118381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:58:38.129041 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 05:58:38.129780 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 05:58:38.129836 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:58:38.131696 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:58:38.131752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:58:38.133289 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 05:58:38.133335 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:58:38.135182 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:58:38.135224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:58:38.136994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 7 05:58:38.146509 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:58:38.146654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:58:38.154245 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 05:58:38.154364 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 05:58:38.156288 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 05:58:38.156415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:58:38.158450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:58:38.158510 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:58:38.159551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:58:38.159583 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:58:38.161389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:58:38.161430 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:58:38.163956 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 05:58:38.163999 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:58:38.166304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:58:38.166347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:58:38.168834 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 05:58:38.168878 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 05:58:38.187087 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:58:38.188021 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:58:38.188073 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:58:38.189980 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 05:58:38.190022 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:58:38.191749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 05:58:38.191789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:58:38.193687 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:58:38.193739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:58:38.195743 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:58:38.195964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:58:38.197837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:58:38.199640 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:58:38.208393 systemd[1]: Switching root. Jul 7 05:58:38.232964 systemd-journald[237]: Journal stopped Jul 7 05:58:38.906035 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jul 7 05:58:38.906088 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 05:58:38.906101 kernel: SELinux: policy capability open_perms=1 Jul 7 05:58:38.906114 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 05:58:38.906123 kernel: SELinux: policy capability always_check_network=0 Jul 7 05:58:38.906133 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 05:58:38.906143 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 05:58:38.906153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 05:58:38.906163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 05:58:38.906177 kernel: audit: type=1403 audit(1751867918.376:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 05:58:38.906188 systemd[1]: Successfully loaded SELinux policy in 31.248ms. Jul 7 05:58:38.906204 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.127ms. Jul 7 05:58:38.906218 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:58:38.906229 systemd[1]: Detected virtualization kvm. Jul 7 05:58:38.906240 systemd[1]: Detected architecture arm64. Jul 7 05:58:38.906251 systemd[1]: Detected first boot. Jul 7 05:58:38.906262 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:58:38.906272 zram_generator::config[1042]: No configuration found. Jul 7 05:58:38.906288 systemd[1]: Populated /etc with preset unit settings. Jul 7 05:58:38.906299 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 05:58:38.906312 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 05:58:38.906323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 05:58:38.906334 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 05:58:38.906345 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 05:58:38.906356 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 05:58:38.906367 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 05:58:38.906378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 05:58:38.906392 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 05:58:38.906404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 05:58:38.906416 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 05:58:38.906428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:58:38.906439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:58:38.906450 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 05:58:38.906460 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 05:58:38.906472 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 05:58:38.906482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 7 05:58:38.906493 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 05:58:38.906506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:58:38.906517 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 05:58:38.906528 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 05:58:38.906539 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 05:58:38.906550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 05:58:38.906561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:58:38.906572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:58:38.906583 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:58:38.906595 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:58:38.906606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 05:58:38.906616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 05:58:38.906627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:58:38.906638 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:58:38.906649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:58:38.906659 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 05:58:38.906670 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 05:58:38.906681 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 05:58:38.906694 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 05:58:38.906704 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 05:58:38.906720 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 05:58:38.906731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 05:58:38.906742 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 05:58:38.906753 systemd[1]: Reached target machines.target - Containers. Jul 7 05:58:38.906764 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 05:58:38.906775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:58:38.906787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:58:38.906800 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 05:58:38.906810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:58:38.906821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:58:38.906831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:58:38.906842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 05:58:38.906853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:58:38.906865 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 7 05:58:38.906876 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 05:58:38.906888 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 05:58:38.906944 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 05:58:38.906956 kernel: fuse: init (API version 7.39) Jul 7 05:58:38.906966 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 05:58:38.906976 kernel: loop: module loaded Jul 7 05:58:38.906986 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:58:38.906997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:58:38.907007 kernel: ACPI: bus type drm_connector registered Jul 7 05:58:38.907018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 05:58:38.907031 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 05:58:38.907058 systemd-journald[1106]: Collecting audit messages is disabled. Jul 7 05:58:38.907079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:58:38.907092 systemd-journald[1106]: Journal started Jul 7 05:58:38.907114 systemd-journald[1106]: Runtime Journal (/run/log/journal/5250e5b0dd364c38abca7cd3fa4abbe6) is 5.9M, max 47.3M, 41.4M free. Jul 7 05:58:38.724596 systemd[1]: Queued start job for default target multi-user.target. Jul 7 05:58:38.738757 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 05:58:38.739111 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 05:58:38.909445 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 05:58:38.909487 systemd[1]: Stopped verity-setup.service. Jul 7 05:58:38.912945 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:58:38.913516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 05:58:38.914535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 05:58:38.915656 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 05:58:38.916668 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 05:58:38.917732 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 05:58:38.918769 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 05:58:38.919864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 05:58:38.921108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:58:38.922411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 05:58:38.922534 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 05:58:38.923801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:58:38.923941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:58:38.925133 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:58:38.925267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:58:38.926470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:58:38.926594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:58:38.927907 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 05:58:38.928038 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 7 05:58:38.929177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:58:38.929293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:58:38.930626 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:58:38.931853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 05:58:38.933186 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 05:58:38.944843 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 05:58:38.958990 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 05:58:38.960841 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 05:58:38.961841 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 05:58:38.961877 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:58:38.963643 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 05:58:38.965637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 05:58:38.967595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 05:58:38.968631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:58:38.969891 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 05:58:38.971657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 05:58:38.972782 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:58:38.976039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 05:58:38.977088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:58:38.979063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:58:38.979837 systemd-journald[1106]: Time spent on flushing to /var/log/journal/5250e5b0dd364c38abca7cd3fa4abbe6 is 14.235ms for 853 entries. Jul 7 05:58:38.979837 systemd-journald[1106]: System Journal (/var/log/journal/5250e5b0dd364c38abca7cd3fa4abbe6) is 8.0M, max 195.6M, 187.6M free. Jul 7 05:58:39.012380 systemd-journald[1106]: Received client request to flush runtime journal. Jul 7 05:58:39.012438 kernel: loop0: detected capacity change from 0 to 211168 Jul 7 05:58:39.012457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 05:58:38.982951 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 05:58:38.987099 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:58:38.991381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:58:38.992760 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 05:58:38.994097 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 05:58:38.995474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 7 05:58:39.003647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 05:58:39.005832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 05:58:39.007320 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 05:58:39.009467 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 05:58:39.016209 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 05:58:39.028358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:58:39.034056 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 05:58:39.038086 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jul 7 05:58:39.038100 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jul 7 05:58:39.044019 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 05:58:39.045968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:58:39.047939 kernel: loop1: detected capacity change from 0 to 114432 Jul 7 05:58:39.048102 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 05:58:39.060166 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 05:58:39.082781 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 05:58:39.087625 kernel: loop2: detected capacity change from 0 to 114328 Jul 7 05:58:39.094319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:58:39.104047 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 05:58:39.104064 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 7 05:58:39.107265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:58:39.118948 kernel: loop3: detected capacity change from 0 to 211168 Jul 7 05:58:39.124929 kernel: loop4: detected capacity change from 0 to 114432 Jul 7 05:58:39.129928 kernel: loop5: detected capacity change from 0 to 114328 Jul 7 05:58:39.132265 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 05:58:39.132620 (sd-merge)[1181]: Merged extensions into '/usr'. Jul 7 05:58:39.135678 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 05:58:39.135786 systemd[1]: Reloading... Jul 7 05:58:39.190998 zram_generator::config[1205]: No configuration found. Jul 7 05:58:39.254236 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 05:58:39.292019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:58:39.327972 systemd[1]: Reloading finished in 191 ms. Jul 7 05:58:39.354057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 05:58:39.357201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 05:58:39.370054 systemd[1]: Starting ensure-sysext.service... Jul 7 05:58:39.372180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 7 05:58:39.380111 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jul 7 05:58:39.380216 systemd[1]: Reloading... Jul 7 05:58:39.398889 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 05:58:39.399155 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 05:58:39.399956 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 05:58:39.400172 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 7 05:58:39.400225 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 7 05:58:39.402434 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:58:39.402443 systemd-tmpfiles[1244]: Skipping /boot Jul 7 05:58:39.409191 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:58:39.409207 systemd-tmpfiles[1244]: Skipping /boot Jul 7 05:58:39.434920 zram_generator::config[1274]: No configuration found. Jul 7 05:58:39.512737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:58:39.548739 systemd[1]: Reloading finished in 168 ms. Jul 7 05:58:39.565674 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 05:58:39.578268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:58:39.585189 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:58:39.587551 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 05:58:39.589690 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 05:58:39.595130 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:58:39.601163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:58:39.605174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 05:58:39.608071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:58:39.609645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:58:39.618860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:58:39.621131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:58:39.622112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:58:39.622844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:58:39.623004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:58:39.626177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:58:39.626314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:58:39.631074 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Jul 7 05:58:39.631578 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 7 05:58:39.642217 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:58:39.642408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:58:39.646987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:58:39.657774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:58:39.662137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:58:39.663302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:58:39.665762 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 05:58:39.668504 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 05:58:39.669984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:58:39.671623 augenrules[1342]: No rules Jul 7 05:58:39.671753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 05:58:39.675405 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:58:39.676865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 05:58:39.678414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:58:39.680995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:58:39.682596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:58:39.682725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:58:39.684432 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 05:58:39.698911 systemd[1]: Finished ensure-sysext.service. Jul 7 05:58:39.712781 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 05:58:39.714105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:58:39.727907 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1358) Jul 7 05:58:39.723077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:58:39.726820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:58:39.729046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:58:39.734781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:58:39.735797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:58:39.740582 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:58:39.744254 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 05:58:39.745405 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 05:58:39.745695 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 05:58:39.747037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:58:39.748931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 7 05:58:39.750228 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:58:39.750344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:58:39.753224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:58:39.753351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:58:39.754632 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:58:39.754758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:58:39.766711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 05:58:39.779106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 05:58:39.784673 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:58:39.784740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:58:39.794436 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 05:58:39.812094 systemd-networkd[1380]: lo: Link UP Jul 7 05:58:39.812101 systemd-networkd[1380]: lo: Gained carrier Jul 7 05:58:39.812703 systemd-networkd[1380]: Enumeration completed Jul 7 05:58:39.812800 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 05:58:39.813487 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:58:39.813492 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:58:39.814166 systemd-networkd[1380]: eth0: Link UP Jul 7 05:58:39.814214 systemd-networkd[1380]: eth0: Gained carrier Jul 7 05:58:39.814229 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:58:39.820135 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 05:58:39.821194 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 05:58:39.822445 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 05:58:39.827598 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 05:58:39.828807 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jul 7 05:58:39.834548 systemd-resolved[1312]: Positive Trust Anchors: Jul 7 05:58:39.834588 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 05:58:39.834632 systemd-timesyncd[1383]: Initial clock synchronization to Mon 2025-07-07 05:58:39.735288 UTC. Jul 7 05:58:39.835040 systemd-resolved[1312]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:58:39.835095 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:58:39.837901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:58:39.845269 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jul 7 05:58:39.850469 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:58:39.851472 systemd[1]: Reached target network.target - Network. Jul 7 05:58:39.852316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:58:39.856930 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 05:58:39.868100 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 05:58:39.879718 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:58:39.880414 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:58:39.908222 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 05:58:39.909494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:58:39.910514 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:58:39.911532 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:58:39.912631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 05:58:39.913878 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:58:39.914934 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:58:39.915988 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:58:39.917030 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:58:39.917059 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:58:39.917821 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:58:39.919239 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:58:39.921356 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:58:39.932782 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:58:39.934783 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 05:58:39.936198 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:58:39.937241 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:58:39.938050 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:58:39.938877 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 7 05:58:39.938921 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:58:39.939760 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:58:39.941608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:58:39.944023 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:58:39.944292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:58:39.947110 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:58:39.948124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:58:39.950081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:58:39.955979 jq[1414]: false Jul 7 05:58:39.956427 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 05:58:39.959062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 05:58:39.961496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:58:39.966363 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:58:39.968018 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:58:39.968393 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:58:39.970475 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 05:58:39.972338 dbus-daemon[1413]: [system] SELinux support is enabled Jul 7 05:58:39.973176 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:58:39.977178 extend-filesystems[1415]: Found loop3 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found loop4 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found loop5 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda1 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda2 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda3 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found usr Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda4 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda6 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda7 Jul 7 05:58:39.979126 extend-filesystems[1415]: Found vda9 Jul 7 05:58:39.979126 extend-filesystems[1415]: Checking size of /dev/vda9 Jul 7 05:58:39.977203 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:58:39.999488 jq[1429]: true Jul 7 05:58:39.980091 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 05:58:39.988839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:58:39.988997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:58:39.989232 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:58:39.989357 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:58:39.993067 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:58:39.993234 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 7 05:58:40.003969 extend-filesystems[1415]: Resized partition /dev/vda9 Jul 7 05:58:40.004995 jq[1437]: true Jul 7 05:58:40.008686 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 05:58:40.008764 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:58:40.010532 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:58:40.010549 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:58:40.014619 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:58:40.029017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1362) Jul 7 05:58:40.029048 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 05:58:40.028110 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:58:40.043727 tar[1435]: linux-arm64/LICENSE Jul 7 05:58:40.044432 tar[1435]: linux-arm64/helm Jul 7 05:58:40.046839 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:58:40.047607 systemd-logind[1421]: New seat seat0. Jul 7 05:58:40.048794 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:58:40.050047 update_engine[1424]: I20250707 05:58:40.049841 1424 main.cc:92] Flatcar Update Engine starting Jul 7 05:58:40.052690 systemd[1]: Started update-engine.service - Update Engine. Jul 7 05:58:40.053360 update_engine[1424]: I20250707 05:58:40.053093 1424 update_check_scheduler.cc:74] Next update check in 9m29s Jul 7 05:58:40.053908 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 05:58:40.062156 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:58:40.069963 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 05:58:40.069963 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 05:58:40.069963 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 05:58:40.073074 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jul 7 05:58:40.075950 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:58:40.076110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:58:40.102407 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:58:40.106089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:58:40.109444 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 05:58:40.130005 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:58:40.232293 containerd[1439]: time="2025-07-07T05:58:40.232211108Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:58:40.260574 containerd[1439]: time="2025-07-07T05:58:40.260540697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 05:58:40.262026 containerd[1439]: time="2025-07-07T05:58:40.261989108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:58:40.262149 containerd[1439]: time="2025-07-07T05:58:40.262132902Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:58:40.262257 containerd[1439]: time="2025-07-07T05:58:40.262192790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:58:40.262505 containerd[1439]: time="2025-07-07T05:58:40.262484803Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:58:40.262622 containerd[1439]: time="2025-07-07T05:58:40.262561164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.262748 containerd[1439]: time="2025-07-07T05:58:40.262728384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:58:40.262854 containerd[1439]: time="2025-07-07T05:58:40.262838165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.263197 containerd[1439]: time="2025-07-07T05:58:40.263123660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:58:40.263267 containerd[1439]: time="2025-07-07T05:58:40.263253390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.263384 containerd[1439]: time="2025-07-07T05:58:40.263366332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:58:40.263445 containerd[1439]: time="2025-07-07T05:58:40.263432817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.263666 containerd[1439]: time="2025-07-07T05:58:40.263600590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.264088 containerd[1439]: time="2025-07-07T05:58:40.264068000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:58:40.264492 containerd[1439]: time="2025-07-07T05:58:40.264349545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:58:40.264492 containerd[1439]: time="2025-07-07T05:58:40.264369652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:58:40.264618 containerd[1439]: time="2025-07-07T05:58:40.264597550Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 7 05:58:40.264765 containerd[1439]: time="2025-07-07T05:58:40.264749048Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:58:40.268167 containerd[1439]: time="2025-07-07T05:58:40.268143697Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:58:40.268609 containerd[1439]: time="2025-07-07T05:58:40.268320437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:58:40.268609 containerd[1439]: time="2025-07-07T05:58:40.268417143Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:58:40.268609 containerd[1439]: time="2025-07-07T05:58:40.268439937Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:58:40.268609 containerd[1439]: time="2025-07-07T05:58:40.268453723Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:58:40.268609 containerd[1439]: time="2025-07-07T05:58:40.268563702Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:58:40.269123 containerd[1439]: time="2025-07-07T05:58:40.269101034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:58:40.269384 containerd[1439]: time="2025-07-07T05:58:40.269363577Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:58:40.269462 containerd[1439]: time="2025-07-07T05:58:40.269449024Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269510927Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269529691Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269542372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269568997Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269582824Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269596058Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269608106Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269619483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269630229Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269652509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269665585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269677159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269688813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270086 containerd[1439]: time="2025-07-07T05:58:40.269700940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269716031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269727211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269741669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269755337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269770941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269781489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269795236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269806574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269826247Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269848409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269860260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.270356 containerd[1439]: time="2025-07-07T05:58:40.269870728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:58:40.271338 containerd[1439]: time="2025-07-07T05:58:40.271302271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:58:40.271665 containerd[1439]: time="2025-07-07T05:58:40.271642755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:58:40.271728 containerd[1439]: time="2025-07-07T05:58:40.271715876Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:58:40.271854 containerd[1439]: time="2025-07-07T05:58:40.271766599Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:58:40.273042 containerd[1439]: time="2025-07-07T05:58:40.271780702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.273042 containerd[1439]: time="2025-07-07T05:58:40.271928486Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:58:40.273042 containerd[1439]: time="2025-07-07T05:58:40.271940575Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:58:40.273042 containerd[1439]: time="2025-07-07T05:58:40.271950767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 05:58:40.273147 containerd[1439]: time="2025-07-07T05:58:40.272370100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:58:40.273147 containerd[1439]: time="2025-07-07T05:58:40.272425840Z" level=info msg="Connect containerd service" Jul 7 05:58:40.273147 containerd[1439]: time="2025-07-07T05:58:40.272452387Z" level=info msg="using legacy CRI server" Jul 7 05:58:40.273147 containerd[1439]: time="2025-07-07T05:58:40.272458747Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:58:40.273147 containerd[1439]: time="2025-07-07T05:58:40.272532224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:58:40.273705 containerd[1439]: time="2025-07-07T05:58:40.273680761Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:58:40.274357 containerd[1439]: time="2025-07-07T05:58:40.274271028Z" level=info msg="Start subscribing containerd event" Jul 7 05:58:40.274444 containerd[1439]: time="2025-07-07T05:58:40.274429083Z" level=info msg="Start recovering state" Jul 7 05:58:40.274536 containerd[1439]: time="2025-07-07T05:58:40.274524801Z" level=info msg="Start event monitor" Jul 7 05:58:40.274768 containerd[1439]: time="2025-07-07T05:58:40.274751672Z" level=info msg="Start snapshots syncer" Jul 7 05:58:40.275133 containerd[1439]: time="2025-07-07T05:58:40.274867142Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:58:40.275133 containerd[1439]: time="2025-07-07T05:58:40.274889264Z" level=info msg="Start streaming server" Jul 7 05:58:40.275133 containerd[1439]: time="2025-07-07T05:58:40.274722439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:58:40.275285 containerd[1439]: time="2025-07-07T05:58:40.275257362Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:58:40.275493 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:58:40.276649 containerd[1439]: time="2025-07-07T05:58:40.276628858Z" level=info msg="containerd successfully booted in 0.045677s" Jul 7 05:58:40.290547 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:58:40.309029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:58:40.319151 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:58:40.323757 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:58:40.323922 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:58:40.327274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:58:40.337632 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:58:40.340591 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:58:40.343119 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 05:58:40.344646 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:58:40.430345 tar[1435]: linux-arm64/README.md Jul 7 05:58:40.443287 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
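
[annotation] The "failed to load cni during init" record above is logged because no CNI configuration exists yet: containerd's CRI plugin finds nothing under /etc/cni/net.d and pod networking stays unavailable until a network add-on installs a config file there. A minimal check of that directory, sketched in Python; the path is taken straight from the log message, everything else (filenames, extensions accepted by libcni) is illustrative:

from pathlib import Path

# containerd's CRI plugin loads *.conf / *.conflist / *.json files from here
# (the same directory named in the "cni config load failed" record above).
cni_dir = Path("/etc/cni/net.d")

configs = sorted(
    p for p in cni_dir.glob("*") if p.suffix in {".conf", ".conflist", ".json"}
) if cni_dir.is_dir() else []

if configs:
    print("CNI configs found:", ", ".join(p.name for p in configs))
else:
    print("no network config found in /etc/cni/net.d (matches the log above)")
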
Jul 7 05:58:41.415981 systemd-networkd[1380]: eth0: Gained IPv6LL Jul 7 05:58:41.419963 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 05:58:41.421505 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:58:41.439179 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 05:58:41.441183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:58:41.443066 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:58:41.456788 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 05:58:41.456959 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 05:58:41.458869 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:58:41.464556 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:58:41.981453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:58:41.982818 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:58:41.984429 systemd[1]: Startup finished in 543ms (kernel) + 4.670s (initrd) + 3.642s (userspace) = 8.856s. Jul 7 05:58:41.985100 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:58:42.390468 kubelet[1526]: E0707 05:58:42.390355 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:58:42.393089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:58:42.393227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:58:46.785505 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:58:46.786580 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:39116.service - OpenSSH per-connection server daemon (10.0.0.1:39116). Jul 7 05:58:46.843094 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 39116 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:58:46.844714 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:46.852651 systemd-logind[1421]: New session 1 of user core. Jul 7 05:58:46.853539 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:58:46.878182 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:58:46.887922 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:58:46.889963 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:58:46.895677 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:58:46.965875 systemd[1544]: Queued start job for default target default.target. Jul 7 05:58:46.974759 systemd[1544]: Created slice app.slice - User Application Slice. Jul 7 05:58:46.974789 systemd[1544]: Reached target paths.target - Paths. Jul 7 05:58:46.974801 systemd[1544]: Reached target timers.target - Timers. Jul 7 05:58:46.976025 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jul 7 05:58:46.985050 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:58:46.985113 systemd[1544]: Reached target sockets.target - Sockets. Jul 7 05:58:46.985128 systemd[1544]: Reached target basic.target - Basic System. Jul 7 05:58:46.985162 systemd[1544]: Reached target default.target - Main User Target. Jul 7 05:58:46.985185 systemd[1544]: Startup finished in 84ms. Jul 7 05:58:46.985413 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:58:46.986579 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:58:47.045388 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:39128.service - OpenSSH per-connection server daemon (10.0.0.1:39128). Jul 7 05:58:47.111145 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 39128 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:58:47.112468 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:47.116592 systemd-logind[1421]: New session 2 of user core. Jul 7 05:58:47.128007 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 05:58:47.178908 sshd[1555]: pam_unix(sshd:session): session closed for user core Jul 7 05:58:47.188163 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:39128.service: Deactivated successfully. Jul 7 05:58:47.189532 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 05:58:47.192018 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Jul 7 05:58:47.193052 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). Jul 7 05:58:47.193841 systemd-logind[1421]: Removed session 2. Jul 7 05:58:47.230102 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:58:47.231213 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:47.234799 systemd-logind[1421]: New session 3 of user core. Jul 7 05:58:47.243074 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:58:47.289972 sshd[1562]: pam_unix(sshd:session): session closed for user core Jul 7 05:58:47.304023 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:39136.service: Deactivated successfully. Jul 7 05:58:47.305266 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 05:58:47.306958 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Jul 7 05:58:47.316132 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150). Jul 7 05:58:47.317066 systemd-logind[1421]: Removed session 3. Jul 7 05:58:47.350378 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:58:47.351459 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:47.354424 systemd-logind[1421]: New session 4 of user core. Jul 7 05:58:47.372083 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:58:47.421364 sshd[1570]: pam_unix(sshd:session): session closed for user core Jul 7 05:58:47.434130 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:39150.service: Deactivated successfully. Jul 7 05:58:47.435441 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:58:47.437936 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. 
Jul 7 05:58:47.438935 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:39162.service - OpenSSH per-connection server daemon (10.0.0.1:39162). Jul 7 05:58:47.439644 systemd-logind[1421]: Removed session 4. Jul 7 05:58:47.475865 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:58:47.476973 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:47.480383 systemd-logind[1421]: New session 5 of user core. Jul 7 05:58:47.486014 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 05:58:47.548315 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 05:58:47.548570 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:58:47.849102 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 05:58:47.849311 (dockerd)[1597]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 05:58:48.103821 dockerd[1597]: time="2025-07-07T05:58:48.103704137Z" level=info msg="Starting up" Jul 7 05:58:48.277819 dockerd[1597]: time="2025-07-07T05:58:48.277774263Z" level=info msg="Loading containers: start." Jul 7 05:58:48.361919 kernel: Initializing XFRM netlink socket Jul 7 05:58:48.422490 systemd-networkd[1380]: docker0: Link UP Jul 7 05:58:48.438929 dockerd[1597]: time="2025-07-07T05:58:48.438880473Z" level=info msg="Loading containers: done." Jul 7 05:58:48.451295 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck229379323-merged.mount: Deactivated successfully. Jul 7 05:58:48.455737 dockerd[1597]: time="2025-07-07T05:58:48.455691325Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 05:58:48.455816 dockerd[1597]: time="2025-07-07T05:58:48.455786756Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 05:58:48.455924 dockerd[1597]: time="2025-07-07T05:58:48.455885573Z" level=info msg="Daemon has completed initialization" Jul 7 05:58:48.482824 dockerd[1597]: time="2025-07-07T05:58:48.482705229Z" level=info msg="API listen on /run/docker.sock" Jul 7 05:58:48.483255 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 05:58:49.029838 containerd[1439]: time="2025-07-07T05:58:49.029795937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 05:58:49.720213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085623102.mount: Deactivated successfully. 
Jul 7 05:58:50.627048 containerd[1439]: time="2025-07-07T05:58:50.627000212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:50.628002 containerd[1439]: time="2025-07-07T05:58:50.627717504Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 7 05:58:50.628782 containerd[1439]: time="2025-07-07T05:58:50.628738043Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:50.631802 containerd[1439]: time="2025-07-07T05:58:50.631746435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:50.633012 containerd[1439]: time="2025-07-07T05:58:50.632981632Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.603142493s" Jul 7 05:58:50.633403 containerd[1439]: time="2025-07-07T05:58:50.633096895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 7 05:58:50.636140 containerd[1439]: time="2025-07-07T05:58:50.636115095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 05:58:51.814028 containerd[1439]: time="2025-07-07T05:58:51.813977358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:51.814613 containerd[1439]: time="2025-07-07T05:58:51.814571417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 7 05:58:51.815507 containerd[1439]: time="2025-07-07T05:58:51.815478300Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:51.820931 containerd[1439]: time="2025-07-07T05:58:51.820874251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:51.822048 containerd[1439]: time="2025-07-07T05:58:51.821997555Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.185845177s" Jul 7 05:58:51.822048 containerd[1439]: time="2025-07-07T05:58:51.822036922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 7 05:58:51.823022 containerd[1439]: 
time="2025-07-07T05:58:51.823000763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 05:58:52.477416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 05:58:52.487133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:58:52.587101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:58:52.590453 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:58:52.623491 kubelet[1811]: E0707 05:58:52.623436 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:58:52.626926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:58:52.627062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:58:53.005837 containerd[1439]: time="2025-07-07T05:58:53.005790270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:53.006835 containerd[1439]: time="2025-07-07T05:58:53.006623046Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 7 05:58:53.007708 containerd[1439]: time="2025-07-07T05:58:53.007647761Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:53.010617 containerd[1439]: time="2025-07-07T05:58:53.010574270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:53.011753 containerd[1439]: time="2025-07-07T05:58:53.011718563Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.188688798s" Jul 7 05:58:53.011800 containerd[1439]: time="2025-07-07T05:58:53.011753766Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 7 05:58:53.012355 containerd[1439]: time="2025-07-07T05:58:53.012184104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 05:58:53.946041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765195811.mount: Deactivated successfully. 
Jul 7 05:58:54.170343 containerd[1439]: time="2025-07-07T05:58:54.170289903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:54.170909 containerd[1439]: time="2025-07-07T05:58:54.170852585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 7 05:58:54.171530 containerd[1439]: time="2025-07-07T05:58:54.171492678Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:54.173923 containerd[1439]: time="2025-07-07T05:58:54.173877069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:54.174794 containerd[1439]: time="2025-07-07T05:58:54.174752911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.162536635s" Jul 7 05:58:54.174794 containerd[1439]: time="2025-07-07T05:58:54.174789521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 7 05:58:54.175428 containerd[1439]: time="2025-07-07T05:58:54.175353400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 05:58:54.768304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077321457.mount: Deactivated successfully. 
Jul 7 05:58:55.611733 containerd[1439]: time="2025-07-07T05:58:55.611681335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:55.612721 containerd[1439]: time="2025-07-07T05:58:55.612688767Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 7 05:58:55.613526 containerd[1439]: time="2025-07-07T05:58:55.613495055Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:55.616874 containerd[1439]: time="2025-07-07T05:58:55.616845239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:55.619164 containerd[1439]: time="2025-07-07T05:58:55.619107966Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.443711047s" Jul 7 05:58:55.619164 containerd[1439]: time="2025-07-07T05:58:55.619144704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 7 05:58:55.619852 containerd[1439]: time="2025-07-07T05:58:55.619696699Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 05:58:56.046846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660736385.mount: Deactivated successfully. 
Jul 7 05:58:56.050145 containerd[1439]: time="2025-07-07T05:58:56.050103388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:56.051281 containerd[1439]: time="2025-07-07T05:58:56.051215717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 05:58:56.051955 containerd[1439]: time="2025-07-07T05:58:56.051930509Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:56.054120 containerd[1439]: time="2025-07-07T05:58:56.054081794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:56.054780 containerd[1439]: time="2025-07-07T05:58:56.054737352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.009544ms" Jul 7 05:58:56.054780 containerd[1439]: time="2025-07-07T05:58:56.054774778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 05:58:56.055201 containerd[1439]: time="2025-07-07T05:58:56.055175310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 05:58:56.496446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398937899.mount: Deactivated successfully. Jul 7 05:58:58.115938 containerd[1439]: time="2025-07-07T05:58:58.115865700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:58.116655 containerd[1439]: time="2025-07-07T05:58:58.116606509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 7 05:58:58.117272 containerd[1439]: time="2025-07-07T05:58:58.117237321Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:58.120883 containerd[1439]: time="2025-07-07T05:58:58.120848547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:58:58.123142 containerd[1439]: time="2025-07-07T05:58:58.123097583Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.067892713s" Jul 7 05:58:58.123183 containerd[1439]: time="2025-07-07T05:58:58.123145609Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 7 05:59:02.727373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
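
[annotation] Each "Pulled image" record above reports the image size in bytes and the elapsed pull time, so the implied transfer rate can be checked directly from the logged figures. A small sketch in Python; the size/duration pairs are copied verbatim from the records above, and the resulting MB/s values are only as accurate as those reported numbers:

# Rough pull-throughput check from the containerd "Pulled image" records above.
pulls = {
    "kube-apiserver:v1.33.2":          (27_348_516, 1.603142493),
    "kube-controller-manager:v1.33.2": (25_092_541, 1.185845177),
    "kube-scheduler:v1.33.2":          (19_848_451, 1.188688798),
    "kube-proxy:v1.33.2":              (28_198_491, 1.162536635),
    "coredns:v1.12.0":                 (19_148_915, 1.443711047),
    "etcd:3.5.21-0":                   (70_026_017, 2.067892713),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mb_s = size_bytes / seconds / 1_000_000
    print(f"{image:35s} {rate_mb_s:6.1f} MB/s")
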
Jul 7 05:59:02.737131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:02.858384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:02.861569 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:59:02.893026 kubelet[1974]: E0707 05:59:02.892977 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:59:02.895800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:59:02.895956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:59:03.221326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:03.232192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:03.254222 systemd[1]: Reloading requested from client PID 1989 ('systemctl') (unit session-5.scope)... Jul 7 05:59:03.254384 systemd[1]: Reloading... Jul 7 05:59:03.324934 zram_generator::config[2028]: No configuration found. Jul 7 05:59:03.421752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:59:03.475804 systemd[1]: Reloading finished in 220 ms. Jul 7 05:59:03.513474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:03.514836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:03.516973 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:59:03.517145 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:03.518550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:03.623339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:03.626762 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:59:03.658414 kubelet[2075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:59:03.658414 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 05:59:03.658414 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
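
[annotation] Before the reload above, kubelet exited three times (05:58:42, 05:58:52, 05:59:02) with the same error because /var/lib/kubelet/config.yaml did not exist; that file is normally written by kubeadm during init or join, so the crash-loop is expected on a node that has not been bootstrapped yet. A small sketch in Python that counts those failures in a saved journal dump; the two search strings match the exact messages visible in the records above, and the script itself is only an illustration, not part of the captured boot:

import re
import sys

# Count kubelet exits caused by the missing kubeadm-generated config file,
# plus scheduled restarts, given journal text on stdin.
missing_config = re.compile(
    r"failed to load Kubelet config file /var/lib/kubelet/config\.yaml"
)
restart_marker = re.compile(r"kubelet\.service: Scheduled restart job")

failures = restarts = 0
for line in sys.stdin:
    if missing_config.search(line):
        failures += 1
    if restart_marker.search(line):
        restarts += 1

print(f"kubelet exits due to missing config.yaml: {failures}")
print(f"scheduled kubelet restarts: {restarts}")
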
Jul 7 05:59:03.658693 kubelet[2075]: I0707 05:59:03.658484 2075 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:59:04.608732 kubelet[2075]: I0707 05:59:04.608691 2075 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 05:59:04.608732 kubelet[2075]: I0707 05:59:04.608720 2075 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:59:04.608933 kubelet[2075]: I0707 05:59:04.608917 2075 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 05:59:04.637226 kubelet[2075]: E0707 05:59:04.637188 2075 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 05:59:04.638127 kubelet[2075]: I0707 05:59:04.638107 2075 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:59:04.647749 kubelet[2075]: E0707 05:59:04.647703 2075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:59:04.647749 kubelet[2075]: I0707 05:59:04.647736 2075 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:59:04.650054 kubelet[2075]: I0707 05:59:04.650039 2075 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:59:04.650985 kubelet[2075]: I0707 05:59:04.650956 2075 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:59:04.651124 kubelet[2075]: I0707 05:59:04.650989 2075 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 05:59:04.651205 kubelet[2075]: I0707 05:59:04.651193 2075 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:59:04.651205 kubelet[2075]: I0707 05:59:04.651201 2075 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 05:59:04.651398 kubelet[2075]: I0707 05:59:04.651383 2075 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:59:04.655538 kubelet[2075]: I0707 05:59:04.655517 2075 kubelet.go:480] "Attempting to sync node with API server" Jul 7 05:59:04.655538 kubelet[2075]: I0707 05:59:04.655538 2075 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:59:04.655594 kubelet[2075]: I0707 05:59:04.655561 2075 kubelet.go:386] "Adding apiserver pod source" Jul 7 05:59:04.656595 kubelet[2075]: I0707 05:59:04.656568 2075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:59:04.659589 kubelet[2075]: I0707 05:59:04.659344 2075 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:59:04.659589 kubelet[2075]: E0707 05:59:04.659513 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 05:59:04.660808 kubelet[2075]: E0707 05:59:04.660769 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 05:59:04.660878 kubelet[2075]: I0707 05:59:04.660791 2075 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 05:59:04.665920 kubelet[2075]: W0707 05:59:04.665885 2075 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 05:59:04.668611 kubelet[2075]: I0707 05:59:04.668593 2075 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 05:59:04.668680 kubelet[2075]: I0707 05:59:04.668643 2075 server.go:1289] "Started kubelet" Jul 7 05:59:04.669541 kubelet[2075]: I0707 05:59:04.668734 2075 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:59:04.670313 kubelet[2075]: I0707 05:59:04.670294 2075 server.go:317] "Adding debug handlers to kubelet server" Jul 7 05:59:04.670410 kubelet[2075]: I0707 05:59:04.670370 2075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:59:04.670691 kubelet[2075]: I0707 05:59:04.670654 2075 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:59:04.671624 kubelet[2075]: I0707 05:59:04.671605 2075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:59:04.673958 kubelet[2075]: I0707 05:59:04.673920 2075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:59:04.674344 kubelet[2075]: E0707 05:59:04.672778 2075 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe29d275db729 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 05:59:04.668608297 +0000 UTC m=+1.038760380,LastTimestamp:2025-07-07 05:59:04.668608297 +0000 UTC m=+1.038760380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 05:59:04.675220 kubelet[2075]: E0707 05:59:04.675198 2075 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 05:59:04.675220 kubelet[2075]: I0707 05:59:04.675225 2075 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 05:59:04.675651 kubelet[2075]: I0707 05:59:04.675336 2075 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 05:59:04.675651 kubelet[2075]: I0707 05:59:04.675452 2075 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:59:04.675651 kubelet[2075]: E0707 05:59:04.675616 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 05:59:04.675876 
kubelet[2075]: I0707 05:59:04.675760 2075 factory.go:223] Registration of the systemd container factory successfully Jul 7 05:59:04.675876 kubelet[2075]: E0707 05:59:04.675781 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms" Jul 7 05:59:04.675876 kubelet[2075]: I0707 05:59:04.675840 2075 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:59:04.676518 kubelet[2075]: E0707 05:59:04.676416 2075 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:59:04.677324 kubelet[2075]: I0707 05:59:04.676673 2075 factory.go:223] Registration of the containerd container factory successfully Jul 7 05:59:04.687081 kubelet[2075]: I0707 05:59:04.687064 2075 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 05:59:04.687081 kubelet[2075]: I0707 05:59:04.687078 2075 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 05:59:04.687178 kubelet[2075]: I0707 05:59:04.687093 2075 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:59:04.687818 kubelet[2075]: I0707 05:59:04.687667 2075 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 05:59:04.688851 kubelet[2075]: I0707 05:59:04.688825 2075 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 05:59:04.688851 kubelet[2075]: I0707 05:59:04.688846 2075 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 05:59:04.688933 kubelet[2075]: I0707 05:59:04.688867 2075 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 05:59:04.688933 kubelet[2075]: I0707 05:59:04.688874 2075 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 05:59:04.688974 kubelet[2075]: E0707 05:59:04.688933 2075 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:59:04.694696 kubelet[2075]: E0707 05:59:04.694646 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 05:59:04.775674 kubelet[2075]: E0707 05:59:04.775622 2075 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 05:59:04.789843 kubelet[2075]: E0707 05:59:04.789804 2075 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:59:04.871037 kubelet[2075]: I0707 05:59:04.870959 2075 policy_none.go:49] "None policy: Start" Jul 7 05:59:04.871037 kubelet[2075]: I0707 05:59:04.870987 2075 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 05:59:04.871037 kubelet[2075]: I0707 05:59:04.871013 2075 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:59:04.875877 kubelet[2075]: E0707 05:59:04.875840 2075 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 05:59:04.876160 kubelet[2075]: E0707 05:59:04.876133 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms" Jul 7 05:59:04.876824 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 05:59:04.890408 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 05:59:04.892694 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 05:59:04.906944 kubelet[2075]: E0707 05:59:04.906517 2075 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 05:59:04.906944 kubelet[2075]: I0707 05:59:04.906680 2075 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:59:04.906944 kubelet[2075]: I0707 05:59:04.906692 2075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:59:04.906944 kubelet[2075]: I0707 05:59:04.906882 2075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:59:04.907695 kubelet[2075]: E0707 05:59:04.907654 2075 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 05:59:04.907695 kubelet[2075]: E0707 05:59:04.907696 2075 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 05:59:05.000841 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 7 05:59:05.007617 kubelet[2075]: I0707 05:59:05.007588 2075 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 05:59:05.007973 kubelet[2075]: E0707 05:59:05.007950 2075 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Jul 7 05:59:05.023194 kubelet[2075]: E0707 05:59:05.023177 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:05.025834 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 7 05:59:05.035795 kubelet[2075]: E0707 05:59:05.035775 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:05.037869 systemd[1]: Created slice kubepods-burstable-podeeb02cc9eaf4fdc67acc063d47c7794e.slice - libcontainer container kubepods-burstable-podeeb02cc9eaf4fdc67acc063d47c7794e.slice. Jul 7 05:59:05.039244 kubelet[2075]: E0707 05:59:05.039208 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:05.178107 kubelet[2075]: I0707 05:59:05.178012 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:05.178107 kubelet[2075]: I0707 05:59:05.178059 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:05.178107 kubelet[2075]: I0707 05:59:05.178080 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:05.178107 kubelet[2075]: I0707 05:59:05.178095 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:05.178107 kubelet[2075]: I0707 05:59:05.178110 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 05:59:05.178296 kubelet[2075]: I0707 05:59:05.178123 2075 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:05.178296 kubelet[2075]: I0707 05:59:05.178137 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:05.178296 kubelet[2075]: I0707 05:59:05.178153 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:05.178296 kubelet[2075]: I0707 05:59:05.178167 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:05.208826 kubelet[2075]: I0707 05:59:05.208797 2075 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 05:59:05.209098 kubelet[2075]: E0707 05:59:05.209059 2075 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Jul 7 05:59:05.276553 kubelet[2075]: E0707 05:59:05.276508 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms" Jul 7 05:59:05.323964 kubelet[2075]: E0707 05:59:05.323935 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:05.324421 containerd[1439]: time="2025-07-07T05:59:05.324375702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 05:59:05.336618 kubelet[2075]: E0707 05:59:05.336580 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:05.336943 containerd[1439]: time="2025-07-07T05:59:05.336917458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 05:59:05.340357 kubelet[2075]: E0707 05:59:05.340318 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:05.340653 containerd[1439]: time="2025-07-07T05:59:05.340618788Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eeb02cc9eaf4fdc67acc063d47c7794e,Namespace:kube-system,Attempt:0,}" Jul 7 05:59:05.505291 kubelet[2075]: E0707 05:59:05.505251 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 05:59:05.610904 kubelet[2075]: I0707 05:59:05.610860 2075 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 05:59:05.611132 kubelet[2075]: E0707 05:59:05.611110 2075 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Jul 7 05:59:05.788838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405175671.mount: Deactivated successfully. Jul 7 05:59:05.794636 containerd[1439]: time="2025-07-07T05:59:05.794595871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:59:05.795545 containerd[1439]: time="2025-07-07T05:59:05.795499873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:59:05.796170 containerd[1439]: time="2025-07-07T05:59:05.795986819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:59:05.796934 containerd[1439]: time="2025-07-07T05:59:05.796889621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 05:59:05.797393 containerd[1439]: time="2025-07-07T05:59:05.797360094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:59:05.798178 containerd[1439]: time="2025-07-07T05:59:05.798149106Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:59:05.798915 containerd[1439]: time="2025-07-07T05:59:05.798457210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:59:05.801041 containerd[1439]: time="2025-07-07T05:59:05.800994053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:59:05.802626 containerd[1439]: time="2025-07-07T05:59:05.802544610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.561341ms" Jul 7 05:59:05.805178 containerd[1439]: time="2025-07-07T05:59:05.805094727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.640539ms" Jul 7 05:59:05.805782 containerd[1439]: time="2025-07-07T05:59:05.805746520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.072596ms" Jul 7 05:59:05.828368 kubelet[2075]: E0707 05:59:05.828332 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 05:59:05.844493 kubelet[2075]: E0707 05:59:05.844444 2075 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 05:59:05.927956 containerd[1439]: time="2025-07-07T05:59:05.927860134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:59:05.927956 containerd[1439]: time="2025-07-07T05:59:05.927927864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:59:05.927956 containerd[1439]: time="2025-07-07T05:59:05.927942058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.928182 containerd[1439]: time="2025-07-07T05:59:05.928029340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.929207 containerd[1439]: time="2025-07-07T05:59:05.928805318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:59:05.929207 containerd[1439]: time="2025-07-07T05:59:05.929188309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:59:05.929207 containerd[1439]: time="2025-07-07T05:59:05.929201583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.929395 containerd[1439]: time="2025-07-07T05:59:05.929265475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.930939 containerd[1439]: time="2025-07-07T05:59:05.930719915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:59:05.930939 containerd[1439]: time="2025-07-07T05:59:05.930830346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:59:05.930939 containerd[1439]: time="2025-07-07T05:59:05.930859613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.931070 containerd[1439]: time="2025-07-07T05:59:05.931007908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:05.953150 systemd[1]: Started cri-containerd-6f9b4dddbee8be89b27dc43267227831ae3b3f1ecf7ef2dca677275cc4d69ec6.scope - libcontainer container 6f9b4dddbee8be89b27dc43267227831ae3b3f1ecf7ef2dca677275cc4d69ec6. Jul 7 05:59:05.954636 systemd[1]: Started cri-containerd-c8a1adf63fde6aae72bd6658c16c70f14e666429e5d1bb0eaa6ebb6823f34625.scope - libcontainer container c8a1adf63fde6aae72bd6658c16c70f14e666429e5d1bb0eaa6ebb6823f34625. Jul 7 05:59:05.958169 systemd[1]: Started cri-containerd-32bcfa7de73551742b2071a44e93f82fa8adfe7d89836911dd785a347a39b68c.scope - libcontainer container 32bcfa7de73551742b2071a44e93f82fa8adfe7d89836911dd785a347a39b68c. Jul 7 05:59:05.985468 containerd[1439]: time="2025-07-07T05:59:05.985402669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f9b4dddbee8be89b27dc43267227831ae3b3f1ecf7ef2dca677275cc4d69ec6\"" Jul 7 05:59:05.993957 containerd[1439]: time="2025-07-07T05:59:05.990682224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eeb02cc9eaf4fdc67acc063d47c7794e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8a1adf63fde6aae72bd6658c16c70f14e666429e5d1bb0eaa6ebb6823f34625\"" Jul 7 05:59:05.994691 kubelet[2075]: E0707 05:59:05.994639 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:05.995136 kubelet[2075]: E0707 05:59:05.995117 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:05.998755 containerd[1439]: time="2025-07-07T05:59:05.998718684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"32bcfa7de73551742b2071a44e93f82fa8adfe7d89836911dd785a347a39b68c\"" Jul 7 05:59:05.999552 containerd[1439]: time="2025-07-07T05:59:05.999521171Z" level=info msg="CreateContainer within sandbox \"c8a1adf63fde6aae72bd6658c16c70f14e666429e5d1bb0eaa6ebb6823f34625\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:59:05.999965 kubelet[2075]: E0707 05:59:05.999916 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:06.001848 containerd[1439]: time="2025-07-07T05:59:06.001772587Z" level=info msg="CreateContainer within sandbox \"6f9b4dddbee8be89b27dc43267227831ae3b3f1ecf7ef2dca677275cc4d69ec6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:59:06.004252 containerd[1439]: time="2025-07-07T05:59:06.004201971Z" level=info msg="CreateContainer within sandbox \"32bcfa7de73551742b2071a44e93f82fa8adfe7d89836911dd785a347a39b68c\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:59:06.016281 containerd[1439]: time="2025-07-07T05:59:06.016248049Z" level=info msg="CreateContainer within sandbox \"c8a1adf63fde6aae72bd6658c16c70f14e666429e5d1bb0eaa6ebb6823f34625\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3abccce3a39a065edcc5b2b4b01108e5cc9ed6f62366e425d31cb4d42ade2a3e\"" Jul 7 05:59:06.016988 containerd[1439]: time="2025-07-07T05:59:06.016950338Z" level=info msg="StartContainer for \"3abccce3a39a065edcc5b2b4b01108e5cc9ed6f62366e425d31cb4d42ade2a3e\"" Jul 7 05:59:06.018944 containerd[1439]: time="2025-07-07T05:59:06.018907704Z" level=info msg="CreateContainer within sandbox \"6f9b4dddbee8be89b27dc43267227831ae3b3f1ecf7ef2dca677275cc4d69ec6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f2e34ce2df2c1de6bb0e232d4840b555d2664b479a98f86a4291bee6bbe9065\"" Jul 7 05:59:06.019301 containerd[1439]: time="2025-07-07T05:59:06.019270164Z" level=info msg="StartContainer for \"2f2e34ce2df2c1de6bb0e232d4840b555d2664b479a98f86a4291bee6bbe9065\"" Jul 7 05:59:06.021726 containerd[1439]: time="2025-07-07T05:59:06.021622458Z" level=info msg="CreateContainer within sandbox \"32bcfa7de73551742b2071a44e93f82fa8adfe7d89836911dd785a347a39b68c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c83e7c0577c73387abce68cb31aba8f9898bcbfde24ea5a684c60026629f87a3\"" Jul 7 05:59:06.022075 containerd[1439]: time="2025-07-07T05:59:06.022052052Z" level=info msg="StartContainer for \"c83e7c0577c73387abce68cb31aba8f9898bcbfde24ea5a684c60026629f87a3\"" Jul 7 05:59:06.048147 systemd[1]: Started cri-containerd-2f2e34ce2df2c1de6bb0e232d4840b555d2664b479a98f86a4291bee6bbe9065.scope - libcontainer container 2f2e34ce2df2c1de6bb0e232d4840b555d2664b479a98f86a4291bee6bbe9065. Jul 7 05:59:06.053180 systemd[1]: Started cri-containerd-3abccce3a39a065edcc5b2b4b01108e5cc9ed6f62366e425d31cb4d42ade2a3e.scope - libcontainer container 3abccce3a39a065edcc5b2b4b01108e5cc9ed6f62366e425d31cb4d42ade2a3e. Jul 7 05:59:06.054421 systemd[1]: Started cri-containerd-c83e7c0577c73387abce68cb31aba8f9898bcbfde24ea5a684c60026629f87a3.scope - libcontainer container c83e7c0577c73387abce68cb31aba8f9898bcbfde24ea5a684c60026629f87a3. 
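The repeated "dial tcp 10.0.0.62:6443: connect: connection refused" errors above come from the kubelet retrying its API calls while the kube-apiserver static pod it has just launched is still starting; once that container listens on port 6443, node registration and the lease updates succeed. The following is a stand-alone Go sketch of that wait-for-listen pattern, not kubelet code; the 10.0.0.62:6443 address and the rough retry cadence are simply taken from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials the given address until a TCP connection succeeds
// or the deadline passes. This mirrors, in spirit, why the kubelet keeps
// logging "connection refused" and retrying: the kube-apiserver it just
// started as a static pod is not accepting connections yet.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("API server at %s not reachable: %w", addr, err)
		}
		time.Sleep(2 * time.Second) // the kubelet's own retry/backoff is more sophisticated
	}
}

func main() {
	// 10.0.0.62:6443 is the advertise address seen in the log; purely illustrative here.
	if err := waitForAPIServer("10.0.0.62:6443", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("API server is accepting connections")
}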
Jul 7 05:59:06.077439 kubelet[2075]: E0707 05:59:06.077395 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s" Jul 7 05:59:06.091227 containerd[1439]: time="2025-07-07T05:59:06.085884413Z" level=info msg="StartContainer for \"2f2e34ce2df2c1de6bb0e232d4840b555d2664b479a98f86a4291bee6bbe9065\" returns successfully" Jul 7 05:59:06.091227 containerd[1439]: time="2025-07-07T05:59:06.087997198Z" level=info msg="StartContainer for \"3abccce3a39a065edcc5b2b4b01108e5cc9ed6f62366e425d31cb4d42ade2a3e\" returns successfully" Jul 7 05:59:06.093373 containerd[1439]: time="2025-07-07T05:59:06.093329983Z" level=info msg="StartContainer for \"c83e7c0577c73387abce68cb31aba8f9898bcbfde24ea5a684c60026629f87a3\" returns successfully" Jul 7 05:59:06.412710 kubelet[2075]: I0707 05:59:06.412601 2075 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 05:59:06.700737 kubelet[2075]: E0707 05:59:06.700449 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:06.700737 kubelet[2075]: E0707 05:59:06.700570 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:06.702779 kubelet[2075]: E0707 05:59:06.702383 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:06.702779 kubelet[2075]: E0707 05:59:06.702497 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:06.705045 kubelet[2075]: E0707 05:59:06.705024 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:06.705148 kubelet[2075]: E0707 05:59:06.705120 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:07.706672 kubelet[2075]: E0707 05:59:07.706547 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:07.706672 kubelet[2075]: E0707 05:59:07.706652 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:07.707004 kubelet[2075]: E0707 05:59:07.706653 2075 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 05:59:07.707004 kubelet[2075]: E0707 05:59:07.706753 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:08.524573 kubelet[2075]: E0707 05:59:08.524499 2075 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 05:59:08.654906 
kubelet[2075]: I0707 05:59:08.654853 2075 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 05:59:08.660605 kubelet[2075]: I0707 05:59:08.660584 2075 apiserver.go:52] "Watching apiserver" Jul 7 05:59:08.676177 kubelet[2075]: I0707 05:59:08.676139 2075 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 05:59:08.676267 kubelet[2075]: I0707 05:59:08.676175 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 05:59:08.680404 kubelet[2075]: E0707 05:59:08.680374 2075 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 05:59:08.680404 kubelet[2075]: I0707 05:59:08.680398 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:08.681960 kubelet[2075]: E0707 05:59:08.681930 2075 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:08.681960 kubelet[2075]: I0707 05:59:08.681952 2075 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:08.683820 kubelet[2075]: E0707 05:59:08.683788 2075 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:10.601624 systemd[1]: Reloading requested from client PID 2361 ('systemctl') (unit session-5.scope)... Jul 7 05:59:10.601638 systemd[1]: Reloading... Jul 7 05:59:10.667929 zram_generator::config[2400]: No configuration found. Jul 7 05:59:10.749707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:59:10.814240 systemd[1]: Reloading finished in 212 ms. Jul 7 05:59:10.846621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:10.858778 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:59:10.859053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:10.859103 systemd[1]: kubelet.service: Consumed 1.430s CPU time, 130.8M memory peak, 0B memory swap peak. Jul 7 05:59:10.865171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:59:10.963070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:59:10.967471 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:59:10.997575 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:59:10.997575 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
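The "Creating a mirror pod for static pod" entries above refer to the control-plane pods the kubelet runs directly from manifest files on disk; the mirror pods are only API-side reflections of them, which is why their creation can fail (here, because the system-node-critical PriorityClass does not exist yet) while the pods themselves keep running. Below is a deliberately simplified Go poller of a manifest directory, assuming the /etc/kubernetes/manifests path the restarted kubelet reports further down; the real kubelet file source parses and hashes the manifests rather than just listing them, so treat this only as an illustration of the file-based pod source idea.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// pollManifests periodically lists a static-pod manifest directory and reports
// files that appear or disappear. Pods defined this way exist independently of
// the API server, which is why the kubelet can run them before it can register
// their mirror pods.
func pollManifests(dir string, interval time.Duration) {
	seen := map[string]bool{}
	for {
		current := map[string]bool{}
		if entries, err := os.ReadDir(dir); err == nil {
			for _, e := range entries {
				if !e.IsDir() {
					current[filepath.Join(dir, e.Name())] = true
				}
			}
		}
		for path := range current {
			if !seen[path] {
				fmt.Println("static pod manifest added:", path)
			}
		}
		for path := range seen {
			if !current[path] {
				fmt.Println("static pod manifest removed:", path)
			}
		}
		seen = current
		time.Sleep(interval)
	}
}

func main() {
	// /etc/kubernetes/manifests is the static pod path the kubelet logs.
	pollManifests("/etc/kubernetes/manifests", 20*time.Second)
}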
Jul 7 05:59:10.997575 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:59:10.999682 kubelet[2442]: I0707 05:59:10.997583 2442 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:59:11.009261 kubelet[2442]: I0707 05:59:11.007820 2442 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 05:59:11.009261 kubelet[2442]: I0707 05:59:11.007858 2442 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:59:11.009261 kubelet[2442]: I0707 05:59:11.008047 2442 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 05:59:11.009388 kubelet[2442]: I0707 05:59:11.009285 2442 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 05:59:11.012401 kubelet[2442]: I0707 05:59:11.012382 2442 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:59:11.018105 kubelet[2442]: E0707 05:59:11.018050 2442 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:59:11.018105 kubelet[2442]: I0707 05:59:11.018074 2442 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:59:11.024059 kubelet[2442]: I0707 05:59:11.023918 2442 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:59:11.024762 kubelet[2442]: I0707 05:59:11.024734 2442 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:59:11.024902 kubelet[2442]: I0707 05:59:11.024764 2442 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 05:59:11.024986 kubelet[2442]: I0707 05:59:11.024915 2442 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:59:11.024986 kubelet[2442]: I0707 05:59:11.024924 2442 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 05:59:11.024986 kubelet[2442]: I0707 05:59:11.024965 2442 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:59:11.025124 kubelet[2442]: I0707 05:59:11.025114 2442 kubelet.go:480] "Attempting to sync node with API server" Jul 7 05:59:11.025980 kubelet[2442]: I0707 05:59:11.025130 2442 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:59:11.025980 kubelet[2442]: I0707 05:59:11.025159 2442 kubelet.go:386] "Adding apiserver pod source" Jul 7 05:59:11.025980 kubelet[2442]: I0707 05:59:11.025173 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:59:11.029876 kubelet[2442]: I0707 05:59:11.026119 2442 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:59:11.029876 kubelet[2442]: I0707 05:59:11.026652 2442 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 05:59:11.033399 kubelet[2442]: I0707 05:59:11.033366 2442 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 05:59:11.033472 kubelet[2442]: I0707 05:59:11.033410 2442 server.go:1289] "Started kubelet" Jul 7 05:59:11.036976 kubelet[2442]: I0707 05:59:11.036834 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:59:11.040157 kubelet[2442]: I0707 05:59:11.039870 2442 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:59:11.040803 kubelet[2442]: I0707 05:59:11.040779 2442 server.go:317] "Adding debug handlers to kubelet server" Jul 7 05:59:11.043755 kubelet[2442]: I0707 05:59:11.043334 2442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:59:11.043755 kubelet[2442]: I0707 05:59:11.043506 2442 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:59:11.043755 kubelet[2442]: I0707 05:59:11.043653 2442 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:59:11.046293 kubelet[2442]: I0707 05:59:11.046217 2442 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 05:59:11.046293 kubelet[2442]: I0707 05:59:11.046302 2442 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 05:59:11.046724 kubelet[2442]: I0707 05:59:11.046658 2442 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:59:11.048097 kubelet[2442]: I0707 05:59:11.048061 2442 factory.go:223] Registration of the systemd container factory successfully Jul 7 05:59:11.048183 kubelet[2442]: I0707 05:59:11.048162 2442 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:59:11.049521 kubelet[2442]: I0707 05:59:11.049503 2442 factory.go:223] Registration of the containerd container factory successfully Jul 7 05:59:11.051196 kubelet[2442]: E0707 05:59:11.051171 2442 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:59:11.062133 kubelet[2442]: I0707 05:59:11.062075 2442 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 05:59:11.066992 kubelet[2442]: I0707 05:59:11.066949 2442 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 05:59:11.066992 kubelet[2442]: I0707 05:59:11.066989 2442 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 05:59:11.067087 kubelet[2442]: I0707 05:59:11.067009 2442 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 05:59:11.067087 kubelet[2442]: I0707 05:59:11.067015 2442 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 05:59:11.067087 kubelet[2442]: E0707 05:59:11.067065 2442 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:59:11.093143 kubelet[2442]: I0707 05:59:11.093110 2442 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 05:59:11.093143 kubelet[2442]: I0707 05:59:11.093130 2442 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 05:59:11.093290 kubelet[2442]: I0707 05:59:11.093161 2442 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:59:11.093313 kubelet[2442]: I0707 05:59:11.093297 2442 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 05:59:11.093340 kubelet[2442]: I0707 05:59:11.093314 2442 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 05:59:11.093340 kubelet[2442]: I0707 05:59:11.093331 2442 policy_none.go:49] "None policy: Start" Jul 7 05:59:11.093381 kubelet[2442]: I0707 05:59:11.093343 2442 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 05:59:11.093381 kubelet[2442]: I0707 05:59:11.093352 2442 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:59:11.093554 kubelet[2442]: I0707 05:59:11.093537 2442 state_mem.go:75] "Updated machine memory state" Jul 7 05:59:11.096885 kubelet[2442]: E0707 05:59:11.096847 2442 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 05:59:11.097204 kubelet[2442]: I0707 05:59:11.097028 2442 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:59:11.097204 kubelet[2442]: I0707 05:59:11.097046 2442 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:59:11.097470 kubelet[2442]: I0707 05:59:11.097449 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:59:11.098326 kubelet[2442]: E0707 05:59:11.098037 2442 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 05:59:11.168034 kubelet[2442]: I0707 05:59:11.167949 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 05:59:11.168973 kubelet[2442]: I0707 05:59:11.168018 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:11.170212 kubelet[2442]: I0707 05:59:11.168064 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.204687 kubelet[2442]: I0707 05:59:11.204606 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 05:59:11.209695 kubelet[2442]: I0707 05:59:11.209658 2442 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 05:59:11.209768 kubelet[2442]: I0707 05:59:11.209725 2442 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 05:59:11.248161 kubelet[2442]: I0707 05:59:11.248111 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:11.248255 kubelet[2442]: I0707 05:59:11.248185 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:11.248255 kubelet[2442]: I0707 05:59:11.248220 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeb02cc9eaf4fdc67acc063d47c7794e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eeb02cc9eaf4fdc67acc063d47c7794e\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:11.248299 kubelet[2442]: I0707 05:59:11.248262 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.248331 kubelet[2442]: I0707 05:59:11.248296 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.248331 kubelet[2442]: I0707 05:59:11.248321 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.248375 kubelet[2442]: I0707 05:59:11.248346 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.248375 kubelet[2442]: I0707 05:59:11.248362 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:59:11.248420 kubelet[2442]: I0707 05:59:11.248380 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 05:59:11.473513 kubelet[2442]: E0707 05:59:11.473426 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:11.473513 kubelet[2442]: E0707 05:59:11.473491 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:11.474583 kubelet[2442]: E0707 05:59:11.474556 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:12.026155 kubelet[2442]: I0707 05:59:12.026113 2442 apiserver.go:52] "Watching apiserver" Jul 7 05:59:12.047392 kubelet[2442]: I0707 05:59:12.047362 2442 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 05:59:12.085215 kubelet[2442]: I0707 05:59:12.085180 2442 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:12.085504 kubelet[2442]: E0707 05:59:12.085479 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:12.085603 kubelet[2442]: E0707 05:59:12.085563 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:12.097038 kubelet[2442]: E0707 05:59:12.096995 2442 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 05:59:12.097163 kubelet[2442]: E0707 05:59:12.097142 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:12.115684 kubelet[2442]: I0707 05:59:12.115634 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.115611864 podStartE2EDuration="1.115611864s" podCreationTimestamp="2025-07-07 05:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:12.108607028 +0000 UTC 
m=+1.137197717" watchObservedRunningTime="2025-07-07 05:59:12.115611864 +0000 UTC m=+1.144202553" Jul 7 05:59:12.115832 kubelet[2442]: I0707 05:59:12.115734 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.11572946 podStartE2EDuration="1.11572946s" podCreationTimestamp="2025-07-07 05:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:12.115553426 +0000 UTC m=+1.144144115" watchObservedRunningTime="2025-07-07 05:59:12.11572946 +0000 UTC m=+1.144320149" Jul 7 05:59:12.132197 kubelet[2442]: I0707 05:59:12.132138 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.13212665 podStartE2EDuration="1.13212665s" podCreationTimestamp="2025-07-07 05:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:12.125198251 +0000 UTC m=+1.153788940" watchObservedRunningTime="2025-07-07 05:59:12.13212665 +0000 UTC m=+1.160717299" Jul 7 05:59:12.423111 sudo[1580]: pam_unix(sudo:session): session closed for user root Jul 7 05:59:12.427069 sshd[1577]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:12.430645 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:39162.service: Deactivated successfully. Jul 7 05:59:12.432880 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:59:12.434002 systemd[1]: session-5.scope: Consumed 6.362s CPU time, 155.8M memory peak, 0B memory swap peak. Jul 7 05:59:12.434432 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:59:12.435319 systemd-logind[1421]: Removed session 5. Jul 7 05:59:13.086918 kubelet[2442]: E0707 05:59:13.086877 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:13.087290 kubelet[2442]: E0707 05:59:13.086994 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:13.332618 kubelet[2442]: E0707 05:59:13.332582 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:14.088408 kubelet[2442]: E0707 05:59:14.088330 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:17.542850 kubelet[2442]: I0707 05:59:17.542805 2442 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:59:17.543210 containerd[1439]: time="2025-07-07T05:59:17.543085680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 05:59:17.543418 kubelet[2442]: I0707 05:59:17.543271 2442 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 05:59:18.426480 systemd[1]: Created slice kubepods-besteffort-pod6b9a2a2a_82ed_4364_9b18_00dad76d17ee.slice - libcontainer container kubepods-besteffort-pod6b9a2a2a_82ed_4364_9b18_00dad76d17ee.slice. 
Jul 7 05:59:18.441111 systemd[1]: Created slice kubepods-burstable-podd12c5c68_bdd2_4dd7_8c66_9e777ef8026b.slice - libcontainer container kubepods-burstable-podd12c5c68_bdd2_4dd7_8c66_9e777ef8026b.slice. Jul 7 05:59:18.494353 kubelet[2442]: I0707 05:59:18.494305 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-flannel-cfg\") pod \"kube-flannel-ds-qp9fs\" (UID: \"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.494353 kubelet[2442]: I0707 05:59:18.494355 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-xtables-lock\") pod \"kube-flannel-ds-qp9fs\" (UID: \"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.494514 kubelet[2442]: I0707 05:59:18.494370 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-cni-plugin\") pod \"kube-flannel-ds-qp9fs\" (UID: \"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.494514 kubelet[2442]: I0707 05:59:18.494391 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b9a2a2a-82ed-4364-9b18-00dad76d17ee-kube-proxy\") pod \"kube-proxy-j9phk\" (UID: \"6b9a2a2a-82ed-4364-9b18-00dad76d17ee\") " pod="kube-system/kube-proxy-j9phk" Jul 7 05:59:18.494514 kubelet[2442]: I0707 05:59:18.494406 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ddz\" (UniqueName: \"kubernetes.io/projected/6b9a2a2a-82ed-4364-9b18-00dad76d17ee-kube-api-access-f9ddz\") pod \"kube-proxy-j9phk\" (UID: \"6b9a2a2a-82ed-4364-9b18-00dad76d17ee\") " pod="kube-system/kube-proxy-j9phk" Jul 7 05:59:18.494514 kubelet[2442]: I0707 05:59:18.494445 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-run\") pod \"kube-flannel-ds-qp9fs\" (UID: \"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.494514 kubelet[2442]: I0707 05:59:18.494459 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b9a2a2a-82ed-4364-9b18-00dad76d17ee-xtables-lock\") pod \"kube-proxy-j9phk\" (UID: \"6b9a2a2a-82ed-4364-9b18-00dad76d17ee\") " pod="kube-system/kube-proxy-j9phk" Jul 7 05:59:18.494618 kubelet[2442]: I0707 05:59:18.494473 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b9a2a2a-82ed-4364-9b18-00dad76d17ee-lib-modules\") pod \"kube-proxy-j9phk\" (UID: \"6b9a2a2a-82ed-4364-9b18-00dad76d17ee\") " pod="kube-system/kube-proxy-j9phk" Jul 7 05:59:18.494618 kubelet[2442]: I0707 05:59:18.494488 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-cni\") pod \"kube-flannel-ds-qp9fs\" (UID: 
\"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.494618 kubelet[2442]: I0707 05:59:18.494503 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jcgh\" (UniqueName: \"kubernetes.io/projected/d12c5c68-bdd2-4dd7-8c66-9e777ef8026b-kube-api-access-8jcgh\") pod \"kube-flannel-ds-qp9fs\" (UID: \"d12c5c68-bdd2-4dd7-8c66-9e777ef8026b\") " pod="kube-flannel/kube-flannel-ds-qp9fs" Jul 7 05:59:18.651965 kubelet[2442]: E0707 05:59:18.651938 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:18.739677 kubelet[2442]: E0707 05:59:18.739646 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:18.740892 containerd[1439]: time="2025-07-07T05:59:18.740822651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j9phk,Uid:6b9a2a2a-82ed-4364-9b18-00dad76d17ee,Namespace:kube-system,Attempt:0,}" Jul 7 05:59:18.744325 kubelet[2442]: E0707 05:59:18.744239 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:18.745025 containerd[1439]: time="2025-07-07T05:59:18.744886790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qp9fs,Uid:d12c5c68-bdd2-4dd7-8c66-9e777ef8026b,Namespace:kube-flannel,Attempt:0,}" Jul 7 05:59:18.760707 containerd[1439]: time="2025-07-07T05:59:18.760602799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:59:18.760707 containerd[1439]: time="2025-07-07T05:59:18.760659398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:59:18.760707 containerd[1439]: time="2025-07-07T05:59:18.760674277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:18.760995 containerd[1439]: time="2025-07-07T05:59:18.760759915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:18.770843 containerd[1439]: time="2025-07-07T05:59:18.770553111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:59:18.770843 containerd[1439]: time="2025-07-07T05:59:18.770606470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:59:18.770843 containerd[1439]: time="2025-07-07T05:59:18.770626390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:18.770843 containerd[1439]: time="2025-07-07T05:59:18.770706828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:59:18.779056 systemd[1]: Started cri-containerd-ed5a8d698376a0f8163b14902d7a131a8756cc179ea378b477d55a6cf56874cc.scope - libcontainer container ed5a8d698376a0f8163b14902d7a131a8756cc179ea378b477d55a6cf56874cc. 
Jul 7 05:59:18.781707 systemd[1]: Started cri-containerd-38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f.scope - libcontainer container 38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f. Jul 7 05:59:18.797826 containerd[1439]: time="2025-07-07T05:59:18.797773594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j9phk,Uid:6b9a2a2a-82ed-4364-9b18-00dad76d17ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed5a8d698376a0f8163b14902d7a131a8756cc179ea378b477d55a6cf56874cc\"" Jul 7 05:59:18.799167 kubelet[2442]: E0707 05:59:18.799141 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:18.807028 containerd[1439]: time="2025-07-07T05:59:18.806929327Z" level=info msg="CreateContainer within sandbox \"ed5a8d698376a0f8163b14902d7a131a8756cc179ea378b477d55a6cf56874cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 05:59:18.812686 containerd[1439]: time="2025-07-07T05:59:18.812657904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qp9fs,Uid:d12c5c68-bdd2-4dd7-8c66-9e777ef8026b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\"" Jul 7 05:59:18.813336 kubelet[2442]: E0707 05:59:18.813313 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:18.814775 containerd[1439]: time="2025-07-07T05:59:18.814745372Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jul 7 05:59:18.820770 containerd[1439]: time="2025-07-07T05:59:18.820716864Z" level=info msg="CreateContainer within sandbox \"ed5a8d698376a0f8163b14902d7a131a8756cc179ea378b477d55a6cf56874cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b9a91d6a72092efff65bbd66696b5473e9a872913e7515de3dc3041df5695c6\"" Jul 7 05:59:18.821213 containerd[1439]: time="2025-07-07T05:59:18.821187532Z" level=info msg="StartContainer for \"8b9a91d6a72092efff65bbd66696b5473e9a872913e7515de3dc3041df5695c6\"" Jul 7 05:59:18.846036 systemd[1]: Started cri-containerd-8b9a91d6a72092efff65bbd66696b5473e9a872913e7515de3dc3041df5695c6.scope - libcontainer container 8b9a91d6a72092efff65bbd66696b5473e9a872913e7515de3dc3041df5695c6. Jul 7 05:59:18.867946 containerd[1439]: time="2025-07-07T05:59:18.867912490Z" level=info msg="StartContainer for \"8b9a91d6a72092efff65bbd66696b5473e9a872913e7515de3dc3041df5695c6\" returns successfully" Jul 7 05:59:19.096496 kubelet[2442]: E0707 05:59:19.096407 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:19.097194 kubelet[2442]: E0707 05:59:19.097173 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:19.726093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521680697.mount: Deactivated successfully. 
Jul 7 05:59:19.752703 containerd[1439]: time="2025-07-07T05:59:19.752653953Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:19.753070 containerd[1439]: time="2025-07-07T05:59:19.753038144Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564" Jul 7 05:59:19.754061 containerd[1439]: time="2025-07-07T05:59:19.754038321Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:19.757982 containerd[1439]: time="2025-07-07T05:59:19.757947629Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:19.759148 containerd[1439]: time="2025-07-07T05:59:19.759114401Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 944.241312ms" Jul 7 05:59:19.759209 containerd[1439]: time="2025-07-07T05:59:19.759147240Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Jul 7 05:59:19.763210 containerd[1439]: time="2025-07-07T05:59:19.763149666Z" level=info msg="CreateContainer within sandbox \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 7 05:59:19.773513 containerd[1439]: time="2025-07-07T05:59:19.773470783Z" level=info msg="CreateContainer within sandbox \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f\"" Jul 7 05:59:19.774166 containerd[1439]: time="2025-07-07T05:59:19.774025530Z" level=info msg="StartContainer for \"03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f\"" Jul 7 05:59:19.802052 systemd[1]: Started cri-containerd-03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f.scope - libcontainer container 03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f. Jul 7 05:59:19.824858 containerd[1439]: time="2025-07-07T05:59:19.824820893Z" level=info msg="StartContainer for \"03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f\" returns successfully" Jul 7 05:59:19.827422 systemd[1]: cri-containerd-03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f.scope: Deactivated successfully. 
Jul 7 05:59:19.868517 containerd[1439]: time="2025-07-07T05:59:19.863671618Z" level=info msg="shim disconnected" id=03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f namespace=k8s.io Jul 7 05:59:19.868517 containerd[1439]: time="2025-07-07T05:59:19.868513824Z" level=warning msg="cleaning up after shim disconnected" id=03938ed99d0b520203b535b5a4cf63e2e3a01a4a817ebf5cade1ce7425b7a40f namespace=k8s.io Jul 7 05:59:19.868517 containerd[1439]: time="2025-07-07T05:59:19.868525984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:59:19.878056 containerd[1439]: time="2025-07-07T05:59:19.878018480Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:59:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:59:20.098983 kubelet[2442]: E0707 05:59:20.098916 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:20.101001 containerd[1439]: time="2025-07-07T05:59:20.100858873Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jul 7 05:59:20.110565 kubelet[2442]: I0707 05:59:20.110040 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j9phk" podStartSLOduration=2.110028709 podStartE2EDuration="2.110028709s" podCreationTimestamp="2025-07-07 05:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:19.114140995 +0000 UTC m=+8.142731684" watchObservedRunningTime="2025-07-07 05:59:20.110028709 +0000 UTC m=+9.138619398" Jul 7 05:59:21.911828 containerd[1439]: time="2025-07-07T05:59:21.911781372Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:21.912821 containerd[1439]: time="2025-07-07T05:59:21.912553636Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854" Jul 7 05:59:21.913944 containerd[1439]: time="2025-07-07T05:59:21.913765450Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:21.926629 containerd[1439]: time="2025-07-07T05:59:21.926575259Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:59:21.928098 containerd[1439]: time="2025-07-07T05:59:21.928043148Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 1.827144875s" Jul 7 05:59:21.928098 containerd[1439]: time="2025-07-07T05:59:21.928075947Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Jul 7 05:59:21.932438 containerd[1439]: time="2025-07-07T05:59:21.932256619Z" level=info msg="CreateContainer within sandbox 
\"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 05:59:21.950942 containerd[1439]: time="2025-07-07T05:59:21.950820586Z" level=info msg="CreateContainer within sandbox \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0\"" Jul 7 05:59:21.951343 containerd[1439]: time="2025-07-07T05:59:21.951307055Z" level=info msg="StartContainer for \"56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0\"" Jul 7 05:59:21.979053 systemd[1]: Started cri-containerd-56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0.scope - libcontainer container 56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0. Jul 7 05:59:21.999860 containerd[1439]: time="2025-07-07T05:59:21.998296580Z" level=info msg="StartContainer for \"56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0\" returns successfully" Jul 7 05:59:22.004792 systemd[1]: cri-containerd-56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0.scope: Deactivated successfully. Jul 7 05:59:22.038999 kubelet[2442]: I0707 05:59:22.038956 2442 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 05:59:22.137700 containerd[1439]: time="2025-07-07T05:59:22.137558300Z" level=info msg="shim disconnected" id=56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0 namespace=k8s.io Jul 7 05:59:22.137700 containerd[1439]: time="2025-07-07T05:59:22.137607859Z" level=warning msg="cleaning up after shim disconnected" id=56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0 namespace=k8s.io Jul 7 05:59:22.137700 containerd[1439]: time="2025-07-07T05:59:22.137617099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:59:22.143341 kubelet[2442]: E0707 05:59:22.143185 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:22.144694 systemd[1]: Created slice kubepods-burstable-pod376d99cb_300a_481b_89e5_86d379db33cd.slice - libcontainer container kubepods-burstable-pod376d99cb_300a_481b_89e5_86d379db33cd.slice. Jul 7 05:59:22.162979 containerd[1439]: time="2025-07-07T05:59:22.161988929Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:59:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:59:22.163096 systemd[1]: Created slice kubepods-burstable-poda2b9d6f1_d77b_4301_ba44_8d68aa0044a3.slice - libcontainer container kubepods-burstable-poda2b9d6f1_d77b_4301_ba44_8d68aa0044a3.slice. 
Jul 7 05:59:22.218101 kubelet[2442]: I0707 05:59:22.218060 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrfd\" (UniqueName: \"kubernetes.io/projected/376d99cb-300a-481b-89e5-86d379db33cd-kube-api-access-pdrfd\") pod \"coredns-674b8bbfcf-pxp9h\" (UID: \"376d99cb-300a-481b-89e5-86d379db33cd\") " pod="kube-system/coredns-674b8bbfcf-pxp9h"
Jul 7 05:59:22.218101 kubelet[2442]: I0707 05:59:22.218102 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgqh\" (UniqueName: \"kubernetes.io/projected/a2b9d6f1-d77b-4301-ba44-8d68aa0044a3-kube-api-access-9jgqh\") pod \"coredns-674b8bbfcf-h99kb\" (UID: \"a2b9d6f1-d77b-4301-ba44-8d68aa0044a3\") " pod="kube-system/coredns-674b8bbfcf-h99kb"
Jul 7 05:59:22.218218 kubelet[2442]: I0707 05:59:22.218129 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/376d99cb-300a-481b-89e5-86d379db33cd-config-volume\") pod \"coredns-674b8bbfcf-pxp9h\" (UID: \"376d99cb-300a-481b-89e5-86d379db33cd\") " pod="kube-system/coredns-674b8bbfcf-pxp9h"
Jul 7 05:59:22.218218 kubelet[2442]: I0707 05:59:22.218146 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2b9d6f1-d77b-4301-ba44-8d68aa0044a3-config-volume\") pod \"coredns-674b8bbfcf-h99kb\" (UID: \"a2b9d6f1-d77b-4301-ba44-8d68aa0044a3\") " pod="kube-system/coredns-674b8bbfcf-h99kb"
Jul 7 05:59:22.411959 kubelet[2442]: E0707 05:59:22.411925 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:22.459383 kubelet[2442]: E0707 05:59:22.459245 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:22.460212 containerd[1439]: time="2025-07-07T05:59:22.460156058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxp9h,Uid:376d99cb-300a-481b-89e5-86d379db33cd,Namespace:kube-system,Attempt:0,}"
Jul 7 05:59:22.468029 kubelet[2442]: E0707 05:59:22.467946 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:22.468520 containerd[1439]: time="2025-07-07T05:59:22.468467331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h99kb,Uid:a2b9d6f1-d77b-4301-ba44-8d68aa0044a3,Namespace:kube-system,Attempt:0,}"
Jul 7 05:59:22.552203 containerd[1439]: time="2025-07-07T05:59:22.552151250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h99kb,Uid:a2b9d6f1-d77b-4301-ba44-8d68aa0044a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"653b78193aef57f59e2e632bdb30751e00f76d0158ded7ff4a823cfadd8a984a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jul 7 05:59:22.552435 kubelet[2442]: E0707 05:59:22.552391 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653b78193aef57f59e2e632bdb30751e00f76d0158ded7ff4a823cfadd8a984a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jul 7 05:59:22.552494 kubelet[2442]: E0707 05:59:22.552462 2442 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653b78193aef57f59e2e632bdb30751e00f76d0158ded7ff4a823cfadd8a984a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-h99kb"
Jul 7 05:59:22.552494 kubelet[2442]: E0707 05:59:22.552482 2442 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653b78193aef57f59e2e632bdb30751e00f76d0158ded7ff4a823cfadd8a984a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-h99kb"
Jul 7 05:59:22.552552 kubelet[2442]: E0707 05:59:22.552529 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h99kb_kube-system(a2b9d6f1-d77b-4301-ba44-8d68aa0044a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h99kb_kube-system(a2b9d6f1-d77b-4301-ba44-8d68aa0044a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"653b78193aef57f59e2e632bdb30751e00f76d0158ded7ff4a823cfadd8a984a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-h99kb" podUID="a2b9d6f1-d77b-4301-ba44-8d68aa0044a3"
Jul 7 05:59:22.554349 containerd[1439]: time="2025-07-07T05:59:22.554298367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxp9h,Uid:376d99cb-300a-481b-89e5-86d379db33cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aed2e02a80fa54bfddb1d327fc6970745bc9b796f3547babe522a4a14157770e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jul 7 05:59:22.554579 kubelet[2442]: E0707 05:59:22.554545 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed2e02a80fa54bfddb1d327fc6970745bc9b796f3547babe522a4a14157770e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jul 7 05:59:22.554638 kubelet[2442]: E0707 05:59:22.554594 2442 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed2e02a80fa54bfddb1d327fc6970745bc9b796f3547babe522a4a14157770e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-pxp9h"
Jul 7 05:59:22.554638 kubelet[2442]: E0707 05:59:22.554611 2442 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed2e02a80fa54bfddb1d327fc6970745bc9b796f3547babe522a4a14157770e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-pxp9h"
Jul 7 05:59:22.554687 kubelet[2442]: E0707 05:59:22.554651 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pxp9h_kube-system(376d99cb-300a-481b-89e5-86d379db33cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pxp9h_kube-system(376d99cb-300a-481b-89e5-86d379db33cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aed2e02a80fa54bfddb1d327fc6970745bc9b796f3547babe522a4a14157770e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-pxp9h" podUID="376d99cb-300a-481b-89e5-86d379db33cd"
Jul 7 05:59:22.940470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56fe194892690efdebf44fb8757b84458a67f75f865b2c77854cba8a903c71a0-rootfs.mount: Deactivated successfully.
Jul 7 05:59:23.148995 kubelet[2442]: E0707 05:59:23.148624 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:23.148995 kubelet[2442]: E0707 05:59:23.148705 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:23.162478 containerd[1439]: time="2025-07-07T05:59:23.162437992Z" level=info msg="CreateContainer within sandbox \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jul 7 05:59:23.172535 containerd[1439]: time="2025-07-07T05:59:23.172493880Z" level=info msg="CreateContainer within sandbox \"38fcf3c0a1731b3a3b47bf9cce886ec9b9ae373ecb6cb479f7eb7fb094f0469f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e7920687f908bb611a695e3b2409597b245238dd3592db15105af23dfc256dc9\""
Jul 7 05:59:23.173191 containerd[1439]: time="2025-07-07T05:59:23.173001751Z" level=info msg="StartContainer for \"e7920687f908bb611a695e3b2409597b245238dd3592db15105af23dfc256dc9\""
Jul 7 05:59:23.206044 systemd[1]: Started cri-containerd-e7920687f908bb611a695e3b2409597b245238dd3592db15105af23dfc256dc9.scope - libcontainer container e7920687f908bb611a695e3b2409597b245238dd3592db15105af23dfc256dc9.
Jul 7 05:59:23.232436 containerd[1439]: time="2025-07-07T05:59:23.231963666Z" level=info msg="StartContainer for \"e7920687f908bb611a695e3b2409597b245238dd3592db15105af23dfc256dc9\" returns successfully"
Jul 7 05:59:23.344136 kubelet[2442]: E0707 05:59:23.344081 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:24.152819 kubelet[2442]: E0707 05:59:24.152490 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:24.167046 kubelet[2442]: I0707 05:59:24.166982 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-qp9fs" podStartSLOduration=3.048559312 podStartE2EDuration="6.163737046s" podCreationTimestamp="2025-07-07 05:59:18 +0000 UTC" firstStartedPulling="2025-07-07 05:59:18.814325943 +0000 UTC m=+7.842916592" lastFinishedPulling="2025-07-07 05:59:21.929503677 +0000 UTC m=+10.958094326" observedRunningTime="2025-07-07 05:59:24.163009099 +0000 UTC m=+13.191599788" watchObservedRunningTime="2025-07-07 05:59:24.163737046 +0000 UTC m=+13.192327735"
Jul 7 05:59:24.320022 systemd-networkd[1380]: flannel.1: Link UP
Jul 7 05:59:24.320029 systemd-networkd[1380]: flannel.1: Gained carrier
Jul 7 05:59:25.129769 update_engine[1424]: I20250707 05:59:25.129703 1424 update_attempter.cc:509] Updating boot flags...
Jul 7 05:59:25.149925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3093)
Jul 7 05:59:25.160750 kubelet[2442]: E0707 05:59:25.160720 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:25.185923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3095)
Jul 7 05:59:25.204952 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3095)
Jul 7 05:59:26.152127 systemd-networkd[1380]: flannel.1: Gained IPv6LL
Jul 7 05:59:35.068473 kubelet[2442]: E0707 05:59:35.068429 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:35.069358 containerd[1439]: time="2025-07-07T05:59:35.068828251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxp9h,Uid:376d99cb-300a-481b-89e5-86d379db33cd,Namespace:kube-system,Attempt:0,}"
Jul 7 05:59:35.089835 systemd-networkd[1380]: cni0: Link UP
Jul 7 05:59:35.089841 systemd-networkd[1380]: cni0: Gained carrier
Jul 7 05:59:35.092772 systemd-networkd[1380]: cni0: Lost carrier
Jul 7 05:59:35.096810 systemd-networkd[1380]: vethcc5ecd2e: Link UP
Jul 7 05:59:35.099242 kernel: cni0: port 1(vethcc5ecd2e) entered blocking state
Jul 7 05:59:35.099300 kernel: cni0: port 1(vethcc5ecd2e) entered disabled state
Jul 7 05:59:35.099327 kernel: vethcc5ecd2e: entered allmulticast mode
Jul 7 05:59:35.099343 kernel: vethcc5ecd2e: entered promiscuous mode
Jul 7 05:59:35.100599 kernel: cni0: port 1(vethcc5ecd2e) entered blocking state
Jul 7 05:59:35.100650 kernel: cni0: port 1(vethcc5ecd2e) entered forwarding state
Jul 7 05:59:35.101967 kernel: cni0: port 1(vethcc5ecd2e) entered disabled state
Jul 7 05:59:35.108418 kernel: cni0: port 1(vethcc5ecd2e) entered blocking state
Jul 7 05:59:35.108482 kernel: cni0: port 1(vethcc5ecd2e) entered forwarding state
Jul 7 05:59:35.108182 systemd-networkd[1380]: vethcc5ecd2e: Gained carrier
Jul 7 05:59:35.108379 systemd-networkd[1380]: cni0: Gained carrier
Jul 7 05:59:35.109756 containerd[1439]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"}
Jul 7 05:59:35.109756 containerd[1439]: delegateAdd: netconf sent to delegate plugin:
Jul 7 05:59:35.125651 containerd[1439]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-07T05:59:35.125534674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:59:35.125651 containerd[1439]: time="2025-07-07T05:59:35.125595953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:59:35.125651 containerd[1439]: time="2025-07-07T05:59:35.125624713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:59:35.125803 containerd[1439]: time="2025-07-07T05:59:35.125698912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:59:35.148070 systemd[1]: Started cri-containerd-88e2c7d8d0c6e63a3a55046d29dc01cf7dc7904052f92a0dc9d175e41bb814b4.scope - libcontainer container 88e2c7d8d0c6e63a3a55046d29dc01cf7dc7904052f92a0dc9d175e41bb814b4.
Jul 7 05:59:35.157272 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 05:59:35.172748 containerd[1439]: time="2025-07-07T05:59:35.172708161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxp9h,Uid:376d99cb-300a-481b-89e5-86d379db33cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"88e2c7d8d0c6e63a3a55046d29dc01cf7dc7904052f92a0dc9d175e41bb814b4\""
Jul 7 05:59:35.173523 kubelet[2442]: E0707 05:59:35.173491 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:35.177229 containerd[1439]: time="2025-07-07T05:59:35.177187192Z" level=info msg="CreateContainer within sandbox \"88e2c7d8d0c6e63a3a55046d29dc01cf7dc7904052f92a0dc9d175e41bb814b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 05:59:35.188171 containerd[1439]: time="2025-07-07T05:59:35.188132433Z" level=info msg="CreateContainer within sandbox \"88e2c7d8d0c6e63a3a55046d29dc01cf7dc7904052f92a0dc9d175e41bb814b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b67deb8b3fdbd44e91a7a909fe7eb58b94c0dbf2ef06520c70057f47e61e089a\""
Jul 7 05:59:35.190700 containerd[1439]: time="2025-07-07T05:59:35.190145571Z" level=info msg="StartContainer for \"b67deb8b3fdbd44e91a7a909fe7eb58b94c0dbf2ef06520c70057f47e61e089a\""
Jul 7 05:59:35.216130 systemd[1]: Started cri-containerd-b67deb8b3fdbd44e91a7a909fe7eb58b94c0dbf2ef06520c70057f47e61e089a.scope - libcontainer container b67deb8b3fdbd44e91a7a909fe7eb58b94c0dbf2ef06520c70057f47e61e089a.
Jul 7 05:59:35.236642 containerd[1439]: time="2025-07-07T05:59:35.236597426Z" level=info msg="StartContainer for \"b67deb8b3fdbd44e91a7a909fe7eb58b94c0dbf2ef06520c70057f47e61e089a\" returns successfully"
Jul 7 05:59:36.171680 kubelet[2442]: E0707 05:59:36.171653 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:36.181841 kubelet[2442]: I0707 05:59:36.180339 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pxp9h" podStartSLOduration=18.180325158 podStartE2EDuration="18.180325158s" podCreationTimestamp="2025-07-07 05:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:36.18016728 +0000 UTC m=+25.208757929" watchObservedRunningTime="2025-07-07 05:59:36.180325158 +0000 UTC m=+25.208915847"
Jul 7 05:59:36.456032 systemd-networkd[1380]: vethcc5ecd2e: Gained IPv6LL
Jul 7 05:59:36.520056 systemd-networkd[1380]: cni0: Gained IPv6LL
Jul 7 05:59:37.071867 kubelet[2442]: E0707 05:59:37.071828 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:59:37.072156 containerd[1439]: time="2025-07-07T05:59:37.072120156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h99kb,Uid:a2b9d6f1-d77b-4301-ba44-8d68aa0044a3,Namespace:kube-system,Attempt:0,}"
Jul 7 05:59:37.098705 systemd-networkd[1380]: vethac53c60c: Link UP
Jul 7 05:59:37.101186 kernel: cni0: port 2(vethac53c60c) entered blocking state
Jul 7 05:59:37.101227 kernel: cni0: port 2(vethac53c60c) entered disabled state
Jul 7 05:59:37.101249 kernel: vethac53c60c: entered allmulticast mode
Jul 7 05:59:37.101261 kernel: vethac53c60c: entered promiscuous mode
Jul 7 05:59:37.107972 kernel: cni0: port 2(vethac53c60c) entered blocking state
Jul 7 05:59:37.108048 kernel: cni0: port 2(vethac53c60c) entered forwarding state
Jul 7 05:59:37.108316 systemd-networkd[1380]: vethac53c60c: Gained carrier
Jul 7 05:59:37.109419 containerd[1439]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400000e9a0), "name":"cbr0", "type":"bridge"}
Jul 7 05:59:37.109419 containerd[1439]: delegateAdd: netconf sent to delegate plugin:
Jul 7 05:59:37.125654 containerd[1439]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-07T05:59:37.125535940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:59:37.125654 containerd[1439]: time="2025-07-07T05:59:37.125630819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:59:37.125654 containerd[1439]: time="2025-07-07T05:59:37.125642739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:59:37.125884 containerd[1439]: time="2025-07-07T05:59:37.125721258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:59:37.144128 systemd[1]: Started cri-containerd-d36ef7b409e6390c86695bc5812425557c9380826714738f6a313eecdd45812d.scope - libcontainer container d36ef7b409e6390c86695bc5812425557c9380826714738f6a313eecdd45812d.
Jul 7 05:59:37.152470 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 05:59:37.166785 containerd[1439]: time="2025-07-07T05:59:37.166752167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h99kb,Uid:a2b9d6f1-d77b-4301-ba44-8d68aa0044a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d36ef7b409e6390c86695bc5812425557c9380826714738f6a313eecdd45812d\"" Jul 7 05:59:37.167646 kubelet[2442]: E0707 05:59:37.167616 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:37.172124 containerd[1439]: time="2025-07-07T05:59:37.172040834Z" level=info msg="CreateContainer within sandbox \"d36ef7b409e6390c86695bc5812425557c9380826714738f6a313eecdd45812d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:59:37.176332 kubelet[2442]: E0707 05:59:37.176269 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:37.186251 containerd[1439]: time="2025-07-07T05:59:37.186208692Z" level=info msg="CreateContainer within sandbox \"d36ef7b409e6390c86695bc5812425557c9380826714738f6a313eecdd45812d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e29b6898432d6cfa4ceb334b5e291a9c31294f20a90c58f6d7201f59dc5f0de\"" Jul 7 05:59:37.187546 containerd[1439]: time="2025-07-07T05:59:37.187467239Z" level=info msg="StartContainer for \"9e29b6898432d6cfa4ceb334b5e291a9c31294f20a90c58f6d7201f59dc5f0de\"" Jul 7 05:59:37.213048 systemd[1]: Started cri-containerd-9e29b6898432d6cfa4ceb334b5e291a9c31294f20a90c58f6d7201f59dc5f0de.scope - libcontainer container 9e29b6898432d6cfa4ceb334b5e291a9c31294f20a90c58f6d7201f59dc5f0de. Jul 7 05:59:37.232282 containerd[1439]: time="2025-07-07T05:59:37.232247750Z" level=info msg="StartContainer for \"9e29b6898432d6cfa4ceb334b5e291a9c31294f20a90c58f6d7201f59dc5f0de\" returns successfully" Jul 7 05:59:38.032649 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:57286.service - OpenSSH per-connection server daemon (10.0.0.1:57286). Jul 7 05:59:38.074313 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 57286 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:38.075776 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:38.079971 systemd-logind[1421]: New session 6 of user core. Jul 7 05:59:38.092052 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 05:59:38.178798 kubelet[2442]: E0707 05:59:38.178678 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:38.178798 kubelet[2442]: E0707 05:59:38.178757 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:38.213714 sshd[3394]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:38.216990 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:57286.service: Deactivated successfully. Jul 7 05:59:38.218547 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 05:59:38.219114 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. 
Jul 7 05:59:38.220012 systemd-logind[1421]: Removed session 6. Jul 7 05:59:38.952367 systemd-networkd[1380]: vethac53c60c: Gained IPv6LL Jul 7 05:59:39.179629 kubelet[2442]: E0707 05:59:39.179476 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:39.191613 kubelet[2442]: I0707 05:59:39.191490 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h99kb" podStartSLOduration=21.191476068 podStartE2EDuration="21.191476068s" podCreationTimestamp="2025-07-07 05:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:59:38.188156075 +0000 UTC m=+27.216746764" watchObservedRunningTime="2025-07-07 05:59:39.191476068 +0000 UTC m=+28.220066757" Jul 7 05:59:40.181209 kubelet[2442]: E0707 05:59:40.181180 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:41.182410 kubelet[2442]: E0707 05:59:41.182364 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:59:43.224463 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:42180.service - OpenSSH per-connection server daemon (10.0.0.1:42180). Jul 7 05:59:43.263535 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 42180 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:43.264776 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:43.268966 systemd-logind[1421]: New session 7 of user core. Jul 7 05:59:43.279050 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 05:59:43.384355 sshd[3438]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:43.387607 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:42180.service: Deactivated successfully. Jul 7 05:59:43.389225 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 05:59:43.389905 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:59:43.390736 systemd-logind[1421]: Removed session 7. Jul 7 05:59:48.397460 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:42184.service - OpenSSH per-connection server daemon (10.0.0.1:42184). Jul 7 05:59:48.437616 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 42184 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:48.438835 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:48.442411 systemd-logind[1421]: New session 8 of user core. Jul 7 05:59:48.459027 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 05:59:48.565069 sshd[3473]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:48.575364 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:42184.service: Deactivated successfully. Jul 7 05:59:48.576793 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 05:59:48.578094 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jul 7 05:59:48.585153 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:42200.service - OpenSSH per-connection server daemon (10.0.0.1:42200). Jul 7 05:59:48.586790 systemd-logind[1421]: Removed session 8. 
Jul 7 05:59:48.621591 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 42200 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:48.622727 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:48.626388 systemd-logind[1421]: New session 9 of user core. Jul 7 05:59:48.638093 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 05:59:48.777775 sshd[3488]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:48.788743 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:42200.service: Deactivated successfully. Jul 7 05:59:48.790672 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 05:59:48.792175 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jul 7 05:59:48.799286 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:42202.service - OpenSSH per-connection server daemon (10.0.0.1:42202). Jul 7 05:59:48.800659 systemd-logind[1421]: Removed session 9. Jul 7 05:59:48.833608 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 42202 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:48.834811 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:48.838526 systemd-logind[1421]: New session 10 of user core. Jul 7 05:59:48.846125 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 05:59:48.950826 sshd[3501]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:48.953891 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:42202.service: Deactivated successfully. Jul 7 05:59:48.955643 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 05:59:48.956297 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Jul 7 05:59:48.957311 systemd-logind[1421]: Removed session 10. Jul 7 05:59:53.965411 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:46578.service - OpenSSH per-connection server daemon (10.0.0.1:46578). Jul 7 05:59:54.003519 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 46578 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:54.004668 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:54.008473 systemd-logind[1421]: New session 11 of user core. Jul 7 05:59:54.019039 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 05:59:54.121301 sshd[3538]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:54.131269 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:46578.service: Deactivated successfully. Jul 7 05:59:54.132663 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 05:59:54.133854 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jul 7 05:59:54.134994 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:46592.service - OpenSSH per-connection server daemon (10.0.0.1:46592). Jul 7 05:59:54.135686 systemd-logind[1421]: Removed session 11. Jul 7 05:59:54.173104 sshd[3552]: Accepted publickey for core from 10.0.0.1 port 46592 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:54.174238 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:54.177967 systemd-logind[1421]: New session 12 of user core. Jul 7 05:59:54.190036 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 05:59:54.463081 sshd[3552]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:54.473235 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:46592.service: Deactivated successfully. Jul 7 05:59:54.474994 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 05:59:54.476973 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jul 7 05:59:54.482158 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:46608.service - OpenSSH per-connection server daemon (10.0.0.1:46608). Jul 7 05:59:54.483025 systemd-logind[1421]: Removed session 12. Jul 7 05:59:54.517806 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 46608 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:54.519211 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:54.522435 systemd-logind[1421]: New session 13 of user core. Jul 7 05:59:54.536211 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 05:59:55.249441 sshd[3586]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:55.257473 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:46608.service: Deactivated successfully. Jul 7 05:59:55.260503 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 05:59:55.264960 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jul 7 05:59:55.272567 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:46620.service - OpenSSH per-connection server daemon (10.0.0.1:46620). Jul 7 05:59:55.273694 systemd-logind[1421]: Removed session 13. Jul 7 05:59:55.307361 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 46620 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:55.308537 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:55.312612 systemd-logind[1421]: New session 14 of user core. Jul 7 05:59:55.323048 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 05:59:55.531628 sshd[3606]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:55.539669 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:46620.service: Deactivated successfully. Jul 7 05:59:55.541639 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 05:59:55.542280 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jul 7 05:59:55.547274 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:46622.service - OpenSSH per-connection server daemon (10.0.0.1:46622). Jul 7 05:59:55.548338 systemd-logind[1421]: Removed session 14. Jul 7 05:59:55.583717 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 46622 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:59:55.585029 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:59:55.589114 systemd-logind[1421]: New session 15 of user core. Jul 7 05:59:55.598063 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 05:59:55.700736 sshd[3618]: pam_unix(sshd:session): session closed for user core Jul 7 05:59:55.704024 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:46622.service: Deactivated successfully. Jul 7 05:59:55.705698 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 05:59:55.707200 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Jul 7 05:59:55.708424 systemd-logind[1421]: Removed session 15. Jul 7 06:00:00.711670 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:46632.service - OpenSSH per-connection server daemon (10.0.0.1:46632). 
Jul 7 06:00:00.750334 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 46632 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:00:00.751540 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:00:00.755154 systemd-logind[1421]: New session 16 of user core. Jul 7 06:00:00.769068 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:00:00.874631 sshd[3654]: pam_unix(sshd:session): session closed for user core Jul 7 06:00:00.878248 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:46632.service: Deactivated successfully. Jul 7 06:00:00.880032 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:00:00.880676 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:00:00.881765 systemd-logind[1421]: Removed session 16. Jul 7 06:00:05.888565 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:42740.service - OpenSSH per-connection server daemon (10.0.0.1:42740). Jul 7 06:00:05.926795 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 42740 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:00:05.927983 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:00:05.931500 systemd-logind[1421]: New session 17 of user core. Jul 7 06:00:05.946110 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:00:06.047225 sshd[3688]: pam_unix(sshd:session): session closed for user core Jul 7 06:00:06.050266 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:42740.service: Deactivated successfully. Jul 7 06:00:06.051832 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:00:06.052392 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:00:06.053202 systemd-logind[1421]: Removed session 17. Jul 7 06:00:11.058965 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:42742.service - OpenSSH per-connection server daemon (10.0.0.1:42742). Jul 7 06:00:11.097319 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 42742 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:00:11.098697 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:00:11.102273 systemd-logind[1421]: New session 18 of user core. Jul 7 06:00:11.109213 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:00:11.217714 sshd[3724]: pam_unix(sshd:session): session closed for user core Jul 7 06:00:11.221023 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:42742.service: Deactivated successfully. Jul 7 06:00:11.223287 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:00:11.223846 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:00:11.224705 systemd-logind[1421]: Removed session 18.