Mar 11 02:03:58.277487 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 23:35:49 -00 2026
Mar 11 02:03:58.277509 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:03:58.277521 kernel: BIOS-provided physical RAM map:
Mar 11 02:03:58.277527 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 11 02:03:58.277533 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 11 02:03:58.277539 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 11 02:03:58.277545 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 11 02:03:58.277551 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 11 02:03:58.277557 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 11 02:03:58.277562 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 11 02:03:58.277571 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 11 02:03:58.277577 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 11 02:03:58.277582 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 11 02:03:58.277588 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 11 02:03:58.277595 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 11 02:03:58.277602 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 11 02:03:58.277611 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 11 02:03:58.277617 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 11 02:03:58.277623 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 11 02:03:58.277629 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 11 02:03:58.277635 kernel: NX (Execute Disable) protection: active
Mar 11 02:03:58.277641 kernel: APIC: Static calls initialized
Mar 11 02:03:58.277647 kernel: efi: EFI v2.7 by EDK II
Mar 11 02:03:58.277653 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 11 02:03:58.277659 kernel: SMBIOS 2.8 present.
Mar 11 02:03:58.277665 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 11 02:03:58.277671 kernel: Hypervisor detected: KVM
Mar 11 02:03:58.277680 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 11 02:03:58.277686 kernel: kvm-clock: using sched offset of 9424482036 cycles
Mar 11 02:03:58.277693 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 11 02:03:58.277699 kernel: tsc: Detected 2445.426 MHz processor
Mar 11 02:03:58.277706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 11 02:03:58.277712 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 11 02:03:58.277718 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 11 02:03:58.277725 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 11 02:03:58.277731 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 11 02:03:58.277741 kernel: Using GB pages for direct mapping
Mar 11 02:03:58.277747 kernel: Secure boot disabled
Mar 11 02:03:58.277753 kernel: ACPI: Early table checksum verification disabled
Mar 11 02:03:58.277760 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 11 02:03:58.277770 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 11 02:03:58.277776 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277783 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277792 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 11 02:03:58.277799 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277805 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277812 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277819 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:03:58.277825 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 11 02:03:58.277832 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 11 02:03:58.277841 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 11 02:03:58.277848 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 11 02:03:58.277854 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 11 02:03:58.277861 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 11 02:03:58.277867 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 11 02:03:58.277874 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 11 02:03:58.277880 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 11 02:03:58.277887 kernel: No NUMA configuration found
Mar 11 02:03:58.277893 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 11 02:03:58.277903 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 11 02:03:58.277909 kernel: Zone ranges:
Mar 11 02:03:58.277916 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 11 02:03:58.277922 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 11 02:03:58.277929 kernel: Normal empty
Mar 11 02:03:58.277935 kernel: Movable zone start for each node
Mar 11 02:03:58.277942 kernel: Early memory node ranges
Mar 11 02:03:58.277948 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 11 02:03:58.277955 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 11 02:03:58.277964 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 11 02:03:58.277971 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 11 02:03:58.277977 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 11 02:03:58.277983 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 11 02:03:58.277990 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 11 02:03:58.277996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:03:58.278003 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 11 02:03:58.278055 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 11 02:03:58.278062 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:03:58.278068 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 11 02:03:58.278079 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 11 02:03:58.278086 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 11 02:03:58.278092 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 11 02:03:58.278099 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 11 02:03:58.278105 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 11 02:03:58.278112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 11 02:03:58.278118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 11 02:03:58.278125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 11 02:03:58.278131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 11 02:03:58.278141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 11 02:03:58.278147 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 11 02:03:58.278154 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 11 02:03:58.278160 kernel: TSC deadline timer available
Mar 11 02:03:58.278167 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 11 02:03:58.278174 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 11 02:03:58.278180 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 11 02:03:58.278187 kernel: kvm-guest: setup PV sched yield
Mar 11 02:03:58.278193 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 11 02:03:58.278203 kernel: Booting paravirtualized kernel on KVM
Mar 11 02:03:58.278210 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 11 02:03:58.278216 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 11 02:03:58.278223 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 11 02:03:58.278230 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 11 02:03:58.278236 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 11 02:03:58.278242 kernel: kvm-guest: PV spinlocks enabled
Mar 11 02:03:58.278249 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 11 02:03:58.278257 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:03:58.278267 kernel: random: crng init done
Mar 11 02:03:58.278273 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 02:03:58.278280 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 02:03:58.278287 kernel: Fallback order for Node 0: 0
Mar 11 02:03:58.278293 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 11 02:03:58.278345 kernel: Policy zone: DMA32
Mar 11 02:03:58.278353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 02:03:58.278360 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 11 02:03:58.278371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 11 02:03:58.278378 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 11 02:03:58.278384 kernel: ftrace: allocated 149 pages with 4 groups
Mar 11 02:03:58.278391 kernel: Dynamic Preempt: voluntary
Mar 11 02:03:58.278397 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 02:03:58.278415 kernel: rcu: RCU event tracing is enabled.
Mar 11 02:03:58.278425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 11 02:03:58.278432 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 02:03:58.278439 kernel: Rude variant of Tasks RCU enabled.
Mar 11 02:03:58.278446 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 02:03:58.278453 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 02:03:58.278460 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 11 02:03:58.278470 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 11 02:03:58.278476 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 02:03:58.278483 kernel: Console: colour dummy device 80x25
Mar 11 02:03:58.278490 kernel: printk: console [ttyS0] enabled
Mar 11 02:03:58.278497 kernel: ACPI: Core revision 20230628
Mar 11 02:03:58.278507 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 11 02:03:58.278514 kernel: APIC: Switch to symmetric I/O mode setup
Mar 11 02:03:58.278521 kernel: x2apic enabled
Mar 11 02:03:58.278528 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 11 02:03:58.278535 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 11 02:03:58.278542 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 11 02:03:58.278548 kernel: kvm-guest: setup PV IPIs
Mar 11 02:03:58.278555 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 11 02:03:58.278562 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 11 02:03:58.278572 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 11 02:03:58.278579 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 11 02:03:58.278586 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 11 02:03:58.278592 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 11 02:03:58.278599 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 11 02:03:58.278606 kernel: Spectre V2 : Mitigation: Retpolines
Mar 11 02:03:58.278613 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 11 02:03:58.278620 kernel: Speculative Store Bypass: Vulnerable
Mar 11 02:03:58.278627 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 11 02:03:58.278637 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 11 02:03:58.278644 kernel: active return thunk: srso_alias_return_thunk
Mar 11 02:03:58.278651 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 11 02:03:58.278658 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 11 02:03:58.278665 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 11 02:03:58.278672 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 11 02:03:58.278678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 11 02:03:58.278685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 11 02:03:58.278695 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 11 02:03:58.278702 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 11 02:03:58.278709 kernel: Freeing SMP alternatives memory: 32K
Mar 11 02:03:58.278716 kernel: pid_max: default: 32768 minimum: 301
Mar 11 02:03:58.278722 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 02:03:58.278729 kernel: landlock: Up and running.
Mar 11 02:03:58.278736 kernel: SELinux: Initializing.
Mar 11 02:03:58.278743 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:03:58.278750 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:03:58.278759 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 11 02:03:58.278766 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:03:58.278773 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:03:58.278780 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:03:58.278787 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 11 02:03:58.278794 kernel: signal: max sigframe size: 1776
Mar 11 02:03:58.278801 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 02:03:58.278808 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 02:03:58.278815 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 11 02:03:58.278824 kernel: smp: Bringing up secondary CPUs ...
Mar 11 02:03:58.278831 kernel: smpboot: x86: Booting SMP configuration:
Mar 11 02:03:58.278838 kernel: .... node #0, CPUs: #1 #2 #3
Mar 11 02:03:58.278845 kernel: smp: Brought up 1 node, 4 CPUs
Mar 11 02:03:58.278851 kernel: smpboot: Max logical packages: 1
Mar 11 02:03:58.278858 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 11 02:03:58.278865 kernel: devtmpfs: initialized
Mar 11 02:03:58.278872 kernel: x86/mm: Memory block size: 128MB
Mar 11 02:03:58.278879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 11 02:03:58.278888 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 11 02:03:58.278895 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 11 02:03:58.278902 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 11 02:03:58.278909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 11 02:03:58.278916 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 02:03:58.278923 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 11 02:03:58.278930 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 02:03:58.278937 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 02:03:58.278944 kernel: audit: initializing netlink subsys (disabled)
Mar 11 02:03:58.278953 kernel: audit: type=2000 audit(1773194634.805:1): state=initialized audit_enabled=0 res=1
Mar 11 02:03:58.278960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 02:03:58.278967 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 11 02:03:58.278974 kernel: cpuidle: using governor menu
Mar 11 02:03:58.278980 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 02:03:58.278987 kernel: dca service started, version 1.12.1
Mar 11 02:03:58.278994 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 11 02:03:58.279001 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 11 02:03:58.279044 kernel: PCI: Using configuration type 1 for base access
Mar 11 02:03:58.279055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 11 02:03:58.279062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 02:03:58.279069 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 02:03:58.279076 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 02:03:58.279083 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 02:03:58.279090 kernel: ACPI: Added _OSI(Module Device)
Mar 11 02:03:58.279097 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 02:03:58.279103 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 02:03:58.279110 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 02:03:58.279120 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 11 02:03:58.279127 kernel: ACPI: Interpreter enabled
Mar 11 02:03:58.279134 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 11 02:03:58.279140 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 11 02:03:58.279147 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 11 02:03:58.279154 kernel: PCI: Using E820 reservations for host bridge windows
Mar 11 02:03:58.279161 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 11 02:03:58.279168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 11 02:03:58.279447 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 11 02:03:58.279622 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 11 02:03:58.279773 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 11 02:03:58.279783 kernel: PCI host bridge to bus 0000:00
Mar 11 02:03:58.279932 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 11 02:03:58.280134 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 11 02:03:58.280273 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 11 02:03:58.280495 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 11 02:03:58.280631 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 11 02:03:58.280764 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 11 02:03:58.280897 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 11 02:03:58.281113 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 11 02:03:58.281271 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 11 02:03:58.281489 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 11 02:03:58.281637 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 11 02:03:58.281780 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 11 02:03:58.281923 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 11 02:03:58.282118 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 11 02:03:58.282275 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 11 02:03:58.282491 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 11 02:03:58.282645 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 11 02:03:58.282789 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 11 02:03:58.282959 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 11 02:03:58.283199 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 11 02:03:58.283458 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 11 02:03:58.283639 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 11 02:03:58.283825 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 11 02:03:58.284147 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 11 02:03:58.284395 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 11 02:03:58.284577 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 11 02:03:58.284752 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 11 02:03:58.284937 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 11 02:03:58.285172 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 11 02:03:58.285447 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 11 02:03:58.285656 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 11 02:03:58.285847 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 11 02:03:58.286080 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 11 02:03:58.286231 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 11 02:03:58.286241 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 11 02:03:58.286249 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 11 02:03:58.286256 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 11 02:03:58.286268 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 11 02:03:58.286275 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 11 02:03:58.286281 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 11 02:03:58.286288 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 11 02:03:58.286295 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 11 02:03:58.286358 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 11 02:03:58.286366 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 11 02:03:58.286373 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 11 02:03:58.286380 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 11 02:03:58.286391 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 11 02:03:58.286398 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 11 02:03:58.286405 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 11 02:03:58.286412 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 11 02:03:58.286419 kernel: iommu: Default domain type: Translated
Mar 11 02:03:58.286426 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 11 02:03:58.286433 kernel: efivars: Registered efivars operations
Mar 11 02:03:58.286440 kernel: PCI: Using ACPI for IRQ routing
Mar 11 02:03:58.286446 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 11 02:03:58.286456 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 11 02:03:58.286463 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 11 02:03:58.286470 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 11 02:03:58.286477 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 11 02:03:58.286630 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 11 02:03:58.286802 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 11 02:03:58.286979 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 11 02:03:58.286993 kernel: vgaarb: loaded
Mar 11 02:03:58.287054 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 11 02:03:58.287072 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 11 02:03:58.287083 kernel: clocksource: Switched to clocksource kvm-clock
Mar 11 02:03:58.287094 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 02:03:58.287105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 02:03:58.287115 kernel: pnp: PnP ACPI init
Mar 11 02:03:58.287469 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 11 02:03:58.287487 kernel: pnp: PnP ACPI: found 6 devices
Mar 11 02:03:58.287498 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 11 02:03:58.287516 kernel: NET: Registered PF_INET protocol family
Mar 11 02:03:58.287527 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 02:03:58.287538 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 02:03:58.287548 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 02:03:58.287559 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 02:03:58.287570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 02:03:58.287581 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 02:03:58.287591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:03:58.287602 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:03:58.287616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 02:03:58.287627 kernel: NET: Registered PF_XDP protocol family
Mar 11 02:03:58.287808 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 11 02:03:58.287986 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 11 02:03:58.288210 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 11 02:03:58.288457 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 11 02:03:58.288621 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 11 02:03:58.288790 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 11 02:03:58.288951 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 11 02:03:58.289172 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 11 02:03:58.289189 kernel: PCI: CLS 0 bytes, default 64
Mar 11 02:03:58.289199 kernel: Initialise system trusted keyrings
Mar 11 02:03:58.289210 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 02:03:58.289222 kernel: Key type asymmetric registered
Mar 11 02:03:58.289234 kernel: Asymmetric key parser 'x509' registered
Mar 11 02:03:58.289246 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 11 02:03:58.289265 kernel: io scheduler mq-deadline registered
Mar 11 02:03:58.289278 kernel: io scheduler kyber registered
Mar 11 02:03:58.289291 kernel: io scheduler bfq registered
Mar 11 02:03:58.289387 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 11 02:03:58.289405 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 11 02:03:58.289417 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 11 02:03:58.289429 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 11 02:03:58.289440 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 02:03:58.289452 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 11 02:03:58.289469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 11 02:03:58.289481 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 11 02:03:58.289493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 11 02:03:58.289702 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 11 02:03:58.289730 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 11 02:03:58.289879 kernel: rtc_cmos 00:04: registered as rtc0
Mar 11 02:03:58.290073 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T02:03:57 UTC (1773194637)
Mar 11 02:03:58.290215 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 11 02:03:58.290230 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 11 02:03:58.290237 kernel: efifb: probing for efifb
Mar 11 02:03:58.290244 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 11 02:03:58.290251 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 11 02:03:58.290258 kernel: efifb: scrolling: redraw
Mar 11 02:03:58.290265 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 11 02:03:58.290272 kernel: Console: switching to colour frame buffer device 100x37
Mar 11 02:03:58.290279 kernel: fb0: EFI VGA frame buffer device
Mar 11 02:03:58.290393 kernel: pstore: Using crash dump compression: deflate
Mar 11 02:03:58.290405 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 11 02:03:58.290412 kernel: NET: Registered PF_INET6 protocol family
Mar 11 02:03:58.290419 kernel: Segment Routing with IPv6
Mar 11 02:03:58.290426 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 02:03:58.290433 kernel: NET: Registered PF_PACKET protocol family
Mar 11 02:03:58.290440 kernel: Key type dns_resolver registered
Mar 11 02:03:58.290447 kernel: IPI shorthand broadcast: enabled
Mar 11 02:03:58.290473 kernel: sched_clock: Marking stable (1980027723, 440875143)->(2831866027, -410963161)
Mar 11 02:03:58.290484 kernel: registered taskstats version 1
Mar 11 02:03:58.290494 kernel: Loading compiled-in X.509 certificates
Mar 11 02:03:58.290501 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6607fbe6d184c26ff6db73f5ff7c44b69c5a8579'
Mar 11 02:03:58.290508 kernel: Key type .fscrypt registered
Mar 11 02:03:58.290515 kernel: Key type fscrypt-provisioning registered
Mar 11 02:03:58.290522 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 02:03:58.290529 kernel: ima: Allocated hash algorithm: sha1
Mar 11 02:03:58.290537 kernel: ima: No architecture policies found
Mar 11 02:03:58.290544 kernel: clk: Disabling unused clocks
Mar 11 02:03:58.290551 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 11 02:03:58.290561 kernel: Write protecting the kernel read-only data: 36864k
Mar 11 02:03:58.290569 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 11 02:03:58.290576 kernel: Run /init as init process
Mar 11 02:03:58.290583 kernel: with arguments:
Mar 11 02:03:58.290590 kernel: /init
Mar 11 02:03:58.290597 kernel: with environment:
Mar 11 02:03:58.290604 kernel: HOME=/
Mar 11 02:03:58.290616 kernel: TERM=linux
Mar 11 02:03:58.290632 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:03:58.290650 systemd[1]: Detected virtualization kvm.
Mar 11 02:03:58.290661 systemd[1]: Detected architecture x86-64.
Mar 11 02:03:58.290673 systemd[1]: Running in initrd.
Mar 11 02:03:58.290684 systemd[1]: No hostname configured, using default hostname.
Mar 11 02:03:58.290696 systemd[1]: Hostname set to .
Mar 11 02:03:58.290707 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:03:58.290719 systemd[1]: Queued start job for default target initrd.target.
Mar 11 02:03:58.290734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:03:58.290746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:03:58.290758 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 02:03:58.290770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:03:58.290782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 02:03:58.290800 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 02:03:58.290814 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 02:03:58.290826 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 02:03:58.290838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:03:58.290850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:03:58.290861 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:03:58.290873 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:03:58.290888 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:03:58.290900 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:03:58.290912 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:03:58.290923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:03:58.290935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 02:03:58.290946 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 02:03:58.290958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:03:58.290970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:03:58.290985 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:03:58.290996 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:03:58.291050 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 02:03:58.291063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:03:58.291075 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 02:03:58.291086 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 02:03:58.291098 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:03:58.291110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:03:58.291151 systemd-journald[195]: Collecting audit messages is disabled.
Mar 11 02:03:58.291180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:03:58.291192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 02:03:58.291204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:03:58.291216 systemd-journald[195]: Journal started
Mar 11 02:03:58.291242 systemd-journald[195]: Runtime Journal (/run/log/journal/5f2e800162c245e48b9ee145a2d31d46) is 6.0M, max 48.3M, 42.2M free.
Mar 11 02:03:58.298377 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:03:58.298679 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 02:03:58.311580 systemd-modules-load[196]: Inserted module 'overlay'
Mar 11 02:03:58.317485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 02:03:58.329664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:03:58.333536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:03:58.338545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:03:58.351493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:03:58.370619 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:03:58.377116 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 11 02:03:58.378648 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:03:58.395979 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 11 02:03:58.398061 kernel: Bridge firewalling registered Mar 11 02:03:58.398427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 02:03:58.398843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:03:58.401748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:03:58.434762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:03:58.449530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:03:58.451635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:03:58.460814 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 11 02:03:58.488185 dracut-cmdline[231]: dracut-dracut-053 Mar 11 02:03:58.491657 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575 Mar 11 02:03:58.528686 systemd-resolved[228]: Positive Trust Anchors: Mar 11 02:03:58.528728 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:03:58.528756 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:03:58.561831 systemd-resolved[228]: Defaulting to hostname 'linux'. Mar 11 02:03:58.567622 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:03:58.575502 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:03:58.613429 kernel: SCSI subsystem initialized Mar 11 02:03:58.624462 kernel: Loading iSCSI transport class v2.0-870. Mar 11 02:03:58.639439 kernel: iscsi: registered transport (tcp) Mar 11 02:03:58.666464 kernel: iscsi: registered transport (qla4xxx) Mar 11 02:03:58.666538 kernel: QLogic iSCSI HBA Driver Mar 11 02:03:58.729536 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 11 02:03:58.745543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 11 02:03:58.785421 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 11 02:03:58.785499 kernel: device-mapper: uevent: version 1.0.3 Mar 11 02:03:58.789491 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 11 02:03:58.842428 kernel: raid6: avx2x4 gen() 28091 MB/s Mar 11 02:03:58.860432 kernel: raid6: avx2x2 gen() 30810 MB/s Mar 11 02:03:58.880844 kernel: raid6: avx2x1 gen() 23418 MB/s Mar 11 02:03:58.880925 kernel: raid6: using algorithm avx2x2 gen() 30810 MB/s Mar 11 02:03:58.901569 kernel: raid6: .... xor() 26973 MB/s, rmw enabled Mar 11 02:03:58.901636 kernel: raid6: using avx2x2 recovery algorithm Mar 11 02:03:58.925402 kernel: xor: automatically using best checksumming function avx Mar 11 02:03:59.113438 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 11 02:03:59.128772 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 11 02:03:59.147732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:03:59.172714 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 11 02:03:59.190171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:03:59.207555 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 11 02:03:59.230905 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Mar 11 02:03:59.276424 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 02:03:59.292614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 02:03:59.396507 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:03:59.411714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 11 02:03:59.437638 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 11 02:03:59.443165 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 11 02:03:59.456450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:03:59.465361 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 02:03:59.482452 kernel: cryptd: max_cpu_qlen set to 1000 Mar 11 02:03:59.488612 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 11 02:03:59.488592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 11 02:03:59.509129 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 11 02:03:59.515434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 11 02:03:59.527447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 02:03:59.562493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 11 02:03:59.562588 kernel: GPT:9289727 != 19775487 Mar 11 02:03:59.562638 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 11 02:03:59.562692 kernel: GPT:9289727 != 19775487 Mar 11 02:03:59.562714 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 11 02:03:59.562757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:03:59.527728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:03:59.545944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:03:59.547087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:03:59.547520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:03:59.547697 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:03:59.596520 kernel: libata version 3.00 loaded. Mar 11 02:03:59.603897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:03:59.622817 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 11 02:03:59.622849 kernel: AES CTR mode by8 optimization enabled Mar 11 02:03:59.622868 kernel: ahci 0000:00:1f.2: version 3.0 Mar 11 02:03:59.623209 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 11 02:03:59.651647 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Mar 11 02:03:59.651715 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 11 02:03:59.651994 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 11 02:03:59.652270 kernel: BTRFS: device fsid 1c1071f5-2e45-4924-9ec8-a67042aa7fbc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (475) Mar 11 02:03:59.650840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 11 02:03:59.666246 kernel: scsi host0: ahci Mar 11 02:03:59.675217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:03:59.715915 kernel: scsi host1: ahci Mar 11 02:03:59.716254 kernel: scsi host2: ahci Mar 11 02:03:59.716608 kernel: scsi host3: ahci Mar 11 02:03:59.716821 kernel: scsi host4: ahci Mar 11 02:03:59.717177 kernel: scsi host5: ahci Mar 11 02:03:59.717521 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 11 02:03:59.717542 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 11 02:03:59.717558 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 11 02:03:59.717577 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 11 02:03:59.717594 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 11 02:03:59.717608 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 11 02:03:59.716607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 11 02:03:59.734401 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 11 02:03:59.746243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 11 02:03:59.755534 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 11 02:03:59.780595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 02:03:59.792776 disk-uuid[559]: Primary Header is updated. Mar 11 02:03:59.792776 disk-uuid[559]: Secondary Entries is updated. Mar 11 02:03:59.792776 disk-uuid[559]: Secondary Header is updated. Mar 11 02:03:59.804132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:03:59.802637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:03:59.816085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:03:59.824527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:03:59.828485 kernel: block device autoloading is deprecated and will be removed. Mar 11 02:03:59.851004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 11 02:04:00.035146 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 11 02:04:00.035238 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 11 02:04:00.042446 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 11 02:04:00.047424 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 11 02:04:00.047535 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 11 02:04:00.052096 kernel: ata3.00: applying bridge limits Mar 11 02:04:00.055478 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 11 02:04:00.060497 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 11 02:04:00.060533 kernel: ata3.00: configured for UDMA/100 Mar 11 02:04:00.069466 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 11 02:04:00.142871 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 11 02:04:00.143674 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 11 02:04:00.163456 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 11 02:04:00.833543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:04:00.836836 disk-uuid[560]: The operation has completed successfully. Mar 11 02:04:00.893148 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 02:04:00.893460 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 02:04:00.937912 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 02:04:00.961118 sh[598]: Success Mar 11 02:04:00.992646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 11 02:04:01.125091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 02:04:01.159276 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 11 02:04:01.176874 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 11 02:04:01.212005 kernel: BTRFS info (device dm-0): first mount of filesystem 1c1071f5-2e45-4924-9ec8-a67042aa7fbc Mar 11 02:04:01.212102 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:04:01.212114 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 11 02:04:01.212125 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 11 02:04:01.212135 kernel: BTRFS info (device dm-0): using free space tree Mar 11 02:04:01.248755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 02:04:01.255826 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 02:04:01.272680 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 02:04:01.279387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 02:04:01.314524 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:04:01.314575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:04:01.314595 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:04:01.314613 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:04:01.331691 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 11 02:04:01.341399 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:04:01.349985 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 02:04:01.365742 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 11 02:04:01.572149 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 02:04:01.586721 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 11 02:04:01.611938 ignition[695]: Ignition 2.19.0 Mar 11 02:04:01.612793 ignition[695]: Stage: fetch-offline Mar 11 02:04:01.612841 ignition[695]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:01.612853 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:01.612952 ignition[695]: parsed url from cmdline: "" Mar 11 02:04:01.612957 ignition[695]: no config URL provided Mar 11 02:04:01.612964 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 02:04:01.612976 ignition[695]: no config at "/usr/lib/ignition/user.ign" Mar 11 02:04:01.646642 systemd-networkd[785]: lo: Link UP Mar 11 02:04:01.613145 ignition[695]: op(1): [started] loading QEMU firmware config module Mar 11 02:04:01.646650 systemd-networkd[785]: lo: Gained carrier Mar 11 02:04:01.613152 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 11 02:04:01.652556 systemd-networkd[785]: Enumeration completed Mar 11 02:04:01.630669 ignition[695]: op(1): [finished] loading QEMU firmware config module Mar 11 02:04:01.652771 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 02:04:01.654414 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:04:01.654420 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:04:01.658405 systemd-networkd[785]: eth0: Link UP Mar 11 02:04:01.658411 systemd-networkd[785]: eth0: Gained carrier Mar 11 02:04:01.658519 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:04:01.659653 systemd[1]: Reached target network.target - Network. 
Mar 11 02:04:01.748689 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:04:01.840095 ignition[695]: parsing config with SHA512: b32bc91c865d669e2cbcbb50503fb8b9eee18f45ea339cc321d4ce221029bc4acde98a54f9cfc1d79d9fd7cfe67a6e497b31f43f797a420d5b5d1e2899a24609 Mar 11 02:04:01.844704 unknown[695]: fetched base config from "system" Mar 11 02:04:01.844718 unknown[695]: fetched user config from "qemu" Mar 11 02:04:01.846575 ignition[695]: fetch-offline: fetch-offline passed Mar 11 02:04:01.849504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 02:04:01.846708 ignition[695]: Ignition finished successfully Mar 11 02:04:01.856726 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 11 02:04:01.874251 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 02:04:01.938446 ignition[791]: Ignition 2.19.0 Mar 11 02:04:01.938502 ignition[791]: Stage: kargs Mar 11 02:04:01.938775 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:01.938792 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:01.939672 ignition[791]: kargs: kargs passed Mar 11 02:04:01.939727 ignition[791]: Ignition finished successfully Mar 11 02:04:01.961813 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 02:04:01.981989 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 11 02:04:02.016471 ignition[799]: Ignition 2.19.0 Mar 11 02:04:02.016514 ignition[799]: Stage: disks Mar 11 02:04:02.016721 ignition[799]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:02.016735 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:02.017565 ignition[799]: disks: disks passed Mar 11 02:04:02.017616 ignition[799]: Ignition finished successfully Mar 11 02:04:02.042957 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 11 02:04:02.053213 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 02:04:02.063863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 02:04:02.088883 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 02:04:02.105526 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:04:02.111496 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:04:02.144934 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 11 02:04:02.201187 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 11 02:04:02.210236 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 02:04:02.233880 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 02:04:02.541420 kernel: EXT4-fs (vda9): mounted filesystem ec53a244-36b1-4b02-8fe8-880c05c7af60 r/w with ordered data mode. Quota mode: none. Mar 11 02:04:02.543652 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 02:04:02.555182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 02:04:02.579602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:04:02.590403 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 11 02:04:02.601625 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Mar 11 02:04:02.602090 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 11 02:04:02.621662 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:04:02.621702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:04:02.621714 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:04:02.621725 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:04:02.602214 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 02:04:02.602258 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 02:04:02.642064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 02:04:02.649423 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 11 02:04:02.670633 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 11 02:04:02.737250 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 02:04:02.747508 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Mar 11 02:04:02.757141 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 02:04:02.766746 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 02:04:02.935701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 11 02:04:02.961558 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 02:04:02.964575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 02:04:02.991106 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 11 02:04:02.999786 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:04:03.017182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 11 02:04:03.042875 ignition[930]: INFO : Ignition 2.19.0 Mar 11 02:04:03.042875 ignition[930]: INFO : Stage: mount Mar 11 02:04:03.049619 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:03.049619 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:03.049619 ignition[930]: INFO : mount: mount passed Mar 11 02:04:03.049619 ignition[930]: INFO : Ignition finished successfully Mar 11 02:04:03.071787 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 02:04:03.092612 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 02:04:03.102520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:04:03.119710 systemd-networkd[785]: eth0: Gained IPv6LL Mar 11 02:04:03.136917 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Mar 11 02:04:03.136948 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:04:03.136963 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:04:03.136991 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:04:03.145374 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:04:03.148803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 02:04:03.189085 ignition[961]: INFO : Ignition 2.19.0 Mar 11 02:04:03.189085 ignition[961]: INFO : Stage: files Mar 11 02:04:03.196407 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:03.196407 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:03.196407 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Mar 11 02:04:03.196407 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 02:04:03.196407 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 02:04:03.224896 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 02:04:03.230749 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 02:04:03.230749 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 02:04:03.226634 unknown[961]: wrote ssh authorized keys file for user: core Mar 11 02:04:03.248002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:04:03.256969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 11 02:04:03.326692 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 11 02:04:03.444524 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:04:03.444524 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 11 02:04:03.459435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 11 
02:04:03.459435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 11 02:04:03.459435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 11 02:04:03.459435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 02:04:03.487883 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 02:04:03.494157 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 02:04:03.500517 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 02:04:03.507896 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 02:04:03.514725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 02:04:03.521367 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 02:04:03.521367 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 02:04:03.521367 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 02:04:03.521367 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 11 02:04:03.828676 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 11 02:04:04.611821 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 02:04:04.611821 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 11 02:04:04.632290 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 02:04:04.640711 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 02:04:04.640711 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 11 02:04:04.640711 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 11 02:04:04.662587 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 02:04:04.670581 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 02:04:04.670581 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 11 02:04:04.670581 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 11 02:04:04.745771 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 02:04:04.764622 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 02:04:04.772728 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Mar 11 02:04:04.772728 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 11 02:04:04.772728 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 11 02:04:04.772728 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 11 02:04:04.772728 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 11 02:04:04.772728 ignition[961]: INFO : files: files passed Mar 11 02:04:04.772728 ignition[961]: INFO : Ignition finished successfully Mar 11 02:04:04.818280 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 11 02:04:04.852847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 11 02:04:04.856961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 11 02:04:04.875613 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 11 02:04:04.875799 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 11 02:04:04.892196 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 11 02:04:04.909592 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:04:04.909592 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:04:04.892268 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 02:04:04.931457 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:04:04.899729 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 11 02:04:04.949912 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 11 02:04:05.030235 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 11 02:04:05.030512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 11 02:04:05.043655 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 11 02:04:05.054486 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 11 02:04:05.063218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 11 02:04:05.075639 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 11 02:04:05.104404 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 02:04:05.131222 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 11 02:04:05.220964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:04:05.229415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:04:05.244080 systemd[1]: Stopped target timers.target - Timer Units. Mar 11 02:04:05.256646 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 11 02:04:05.256798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 02:04:05.269238 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 11 02:04:05.279693 systemd[1]: Stopped target basic.target - Basic System. Mar 11 02:04:05.292214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 11 02:04:05.304111 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 02:04:05.315766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 11 02:04:05.328616 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 11 02:04:05.341646 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 02:04:05.348568 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 11 02:04:05.354690 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 11 02:04:05.367123 systemd[1]: Stopped target swap.target - Swaps. Mar 11 02:04:05.371651 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 11 02:04:05.371794 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 11 02:04:05.383611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:04:05.396864 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 02:04:05.404226 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 11 02:04:05.404580 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 02:04:05.415688 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 11 02:04:05.415915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 11 02:04:05.424989 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 11 02:04:05.425285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 02:04:05.431870 systemd[1]: Stopped target paths.target - Path Units. Mar 11 02:04:05.439084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 11 02:04:05.439568 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 02:04:05.448447 systemd[1]: Stopped target slices.target - Slice Units. Mar 11 02:04:05.456374 systemd[1]: Stopped target sockets.target - Socket Units. 
Mar 11 02:04:05.566435 ignition[1015]: INFO : Ignition 2.19.0 Mar 11 02:04:05.566435 ignition[1015]: INFO : Stage: umount Mar 11 02:04:05.566435 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:04:05.566435 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:04:05.464999 systemd[1]: iscsid.socket: Deactivated successfully. Mar 11 02:04:05.603928 ignition[1015]: INFO : umount: umount passed Mar 11 02:04:05.603928 ignition[1015]: INFO : Ignition finished successfully Mar 11 02:04:05.465201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 02:04:05.473423 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 11 02:04:05.473677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 02:04:05.486019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 11 02:04:05.486476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 02:04:05.497385 systemd[1]: ignition-files.service: Deactivated successfully. Mar 11 02:04:05.497622 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 11 02:04:05.526712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 11 02:04:05.535528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 11 02:04:05.543250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 11 02:04:05.544409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:04:05.551650 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 11 02:04:05.551895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 02:04:05.567853 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 11 02:04:05.568011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 11 02:04:05.575518 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 11 02:04:05.575741 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 11 02:04:05.583881 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 11 02:04:05.587969 systemd[1]: Stopped target network.target - Network. Mar 11 02:04:05.596089 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 11 02:04:05.596215 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 11 02:04:05.603935 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 11 02:04:05.604090 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 11 02:04:05.611566 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 11 02:04:05.611650 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 11 02:04:05.619650 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 11 02:04:05.619745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 11 02:04:05.628233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 11 02:04:05.636593 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 11 02:04:05.646287 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 11 02:04:05.646584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 11 02:04:05.647510 systemd-networkd[785]: eth0: DHCPv6 lease lost Mar 11 02:04:05.655193 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 11 02:04:05.655486 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 11 02:04:05.667751 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 11 02:04:05.668115 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 11 02:04:05.676279 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 11 02:04:05.676485 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Mar 11 02:04:05.682980 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 11 02:04:05.683124 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 11 02:04:05.717196 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 11 02:04:05.723590 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 11 02:04:05.723723 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 02:04:05.734263 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 11 02:04:05.734554 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:04:05.742835 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 11 02:04:05.742946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 11 02:04:05.752219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 11 02:04:05.984105 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 11 02:04:05.752451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:04:05.765621 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:04:05.795870 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 11 02:04:05.796197 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:04:05.804266 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 11 02:04:05.804571 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 11 02:04:05.814502 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 11 02:04:05.814617 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 11 02:04:05.822905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 11 02:04:05.823117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 11 02:04:05.826988 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 11 02:04:05.827118 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 11 02:04:05.831113 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 11 02:04:05.831176 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 11 02:04:05.839778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 02:04:05.839866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:04:05.871659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 11 02:04:05.884425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 11 02:04:05.884592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:04:05.893971 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 11 02:04:05.894160 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 02:04:05.896186 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 11 02:04:05.896278 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 02:04:05.898180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:04:05.898272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:04:05.901449 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 11 02:04:05.901667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 11 02:04:05.903981 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 11 02:04:05.910438 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 11 02:04:05.929889 systemd[1]: Switching root. 
Mar 11 02:04:06.739811 systemd-journald[195]: Journal stopped Mar 11 02:04:08.548463 kernel: SELinux: policy capability network_peer_controls=1 Mar 11 02:04:08.548561 kernel: SELinux: policy capability open_perms=1 Mar 11 02:04:08.548585 kernel: SELinux: policy capability extended_socket_class=1 Mar 11 02:04:08.548604 kernel: SELinux: policy capability always_check_network=0 Mar 11 02:04:08.548622 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 11 02:04:08.548639 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 11 02:04:08.548664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 11 02:04:08.548683 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 11 02:04:08.548705 kernel: audit: type=1403 audit(1773194646.874:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 11 02:04:08.548726 systemd[1]: Successfully loaded SELinux policy in 73.877ms. Mar 11 02:04:08.548774 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.295ms. Mar 11 02:04:08.548799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 11 02:04:08.548818 systemd[1]: Detected virtualization kvm. Mar 11 02:04:08.548836 systemd[1]: Detected architecture x86-64. Mar 11 02:04:08.548858 systemd[1]: Detected first boot. Mar 11 02:04:08.548884 systemd[1]: Initializing machine ID from VM UUID. Mar 11 02:04:08.548905 zram_generator::config[1064]: No configuration found. Mar 11 02:04:08.548926 systemd[1]: Populated /etc with preset unit settings. Mar 11 02:04:08.548946 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 11 02:04:08.548965 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 11 02:04:08.548985 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 11 02:04:08.549004 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 11 02:04:08.549024 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 11 02:04:08.549098 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 11 02:04:08.549121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 11 02:04:08.549142 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 11 02:04:08.549162 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 11 02:04:08.549184 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 11 02:04:08.549203 systemd[1]: Created slice user.slice - User and Session Slice. Mar 11 02:04:08.549225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 02:04:08.549244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 02:04:08.549264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 11 02:04:08.549290 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 11 02:04:08.549391 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 11 02:04:08.549416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 11 02:04:08.549435 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 11 02:04:08.549456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 02:04:08.549476 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 11 02:04:08.549495 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 11 02:04:08.549515 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 11 02:04:08.549541 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 11 02:04:08.549562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:04:08.549589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 02:04:08.549609 systemd[1]: Reached target slices.target - Slice Units. Mar 11 02:04:08.549630 systemd[1]: Reached target swap.target - Swaps. Mar 11 02:04:08.549649 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 11 02:04:08.549669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 11 02:04:08.549687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 02:04:08.549712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 02:04:08.549732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 02:04:08.549755 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 11 02:04:08.549775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 11 02:04:08.549794 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 11 02:04:08.549813 systemd[1]: Mounting media.mount - External Media Directory... Mar 11 02:04:08.549833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:04:08.549853 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 11 02:04:08.549872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 11 02:04:08.549895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 11 02:04:08.549917 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 11 02:04:08.549937 systemd[1]: Reached target machines.target - Containers. Mar 11 02:04:08.549956 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 11 02:04:08.549976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 02:04:08.549995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 02:04:08.550014 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 11 02:04:08.550084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 02:04:08.550116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 02:04:08.550144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 02:04:08.550165 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 11 02:04:08.550185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 02:04:08.550205 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 11 02:04:08.550223 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 11 02:04:08.550244 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 11 02:04:08.550262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 11 02:04:08.550284 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 11 02:04:08.550388 kernel: fuse: init (API version 7.39) Mar 11 02:04:08.550410 kernel: loop: module loaded Mar 11 02:04:08.550428 kernel: ACPI: bus type drm_connector registered Mar 11 02:04:08.550448 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 02:04:08.550469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 02:04:08.550488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 11 02:04:08.550541 systemd-journald[1141]: Collecting audit messages is disabled. Mar 11 02:04:08.550584 systemd-journald[1141]: Journal started Mar 11 02:04:08.550615 systemd-journald[1141]: Runtime Journal (/run/log/journal/5f2e800162c245e48b9ee145a2d31d46) is 6.0M, max 48.3M, 42.2M free. Mar 11 02:04:07.852454 systemd[1]: Queued start job for default target multi-user.target. Mar 11 02:04:07.882853 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 11 02:04:07.884202 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 11 02:04:07.884873 systemd[1]: systemd-journald.service: Consumed 2.374s CPU time. Mar 11 02:04:08.557076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 11 02:04:08.567984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 02:04:08.578004 systemd[1]: verity-setup.service: Deactivated successfully. Mar 11 02:04:08.578107 systemd[1]: Stopped verity-setup.service. Mar 11 02:04:08.588448 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:04:08.594962 systemd[1]: Started systemd-journald.service - Journal Service. Mar 11 02:04:08.599957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 11 02:04:08.604664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Mar 11 02:04:08.609928 systemd[1]: Mounted media.mount - External Media Directory. Mar 11 02:04:08.614478 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 11 02:04:08.619463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 11 02:04:08.624551 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 11 02:04:08.628922 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 11 02:04:08.634402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 02:04:08.640297 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 11 02:04:08.640896 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 11 02:04:08.646433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 11 02:04:08.646683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 02:04:08.651975 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 02:04:08.652363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 02:04:08.657892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 02:04:08.658189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 02:04:08.664366 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 11 02:04:08.664611 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 11 02:04:08.669898 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 02:04:08.670205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 02:04:08.675402 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 02:04:08.680634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 11 02:04:08.687130 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 11 02:04:08.713873 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 11 02:04:08.734579 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 11 02:04:08.741423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 11 02:04:08.746019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 11 02:04:08.746125 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 02:04:08.752961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 11 02:04:08.761430 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 11 02:04:08.768912 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 11 02:04:08.772990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 02:04:08.774959 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 11 02:04:08.781712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 11 02:04:08.786788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 02:04:08.789996 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 11 02:04:08.795604 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 02:04:08.804739 systemd-journald[1141]: Time spent on flushing to /var/log/journal/5f2e800162c245e48b9ee145a2d31d46 is 86.483ms for 982 entries. Mar 11 02:04:08.804739 systemd-journald[1141]: System Journal (/var/log/journal/5f2e800162c245e48b9ee145a2d31d46) is 8.0M, max 195.6M, 187.6M free. 
Mar 11 02:04:08.917419 systemd-journald[1141]: Received client request to flush runtime journal. Mar 11 02:04:08.917485 kernel: loop0: detected capacity change from 0 to 140768 Mar 11 02:04:08.805561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:04:08.819533 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 11 02:04:08.826677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 11 02:04:08.845676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:04:08.854728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 11 02:04:08.859920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 11 02:04:08.869676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 11 02:04:08.874857 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 11 02:04:08.880550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:04:08.893818 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 11 02:04:08.910633 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 11 02:04:08.931606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 11 02:04:08.932693 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 11 02:04:08.938381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 11 02:04:08.944100 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Mar 11 02:04:08.944121 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Mar 11 02:04:08.958820 udevadm[1192]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 11 02:04:08.960867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 02:04:08.983750 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 11 02:04:08.991849 kernel: loop1: detected capacity change from 0 to 219192 Mar 11 02:04:08.992781 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 11 02:04:08.994469 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 11 02:04:09.037258 kernel: loop2: detected capacity change from 0 to 142488 Mar 11 02:04:09.034190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 11 02:04:09.047559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 02:04:09.079294 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 11 02:04:09.079409 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 11 02:04:09.088174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:04:09.118548 kernel: loop3: detected capacity change from 0 to 140768 Mar 11 02:04:09.147581 kernel: loop4: detected capacity change from 0 to 219192 Mar 11 02:04:09.170548 kernel: loop5: detected capacity change from 0 to 142488 Mar 11 02:04:09.193261 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 11 02:04:09.194767 (sd-merge)[1206]: Merged extensions into '/usr'. Mar 11 02:04:09.204633 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Mar 11 02:04:09.204685 systemd[1]: Reloading... Mar 11 02:04:09.302377 zram_generator::config[1232]: No configuration found. Mar 11 02:04:09.344029 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 11 02:04:09.451415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:04:09.500111 systemd[1]: Reloading finished in 294 ms. Mar 11 02:04:09.538741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 11 02:04:09.543644 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 11 02:04:09.549224 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 11 02:04:09.573817 systemd[1]: Starting ensure-sysext.service... Mar 11 02:04:09.579791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 02:04:09.586924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:04:09.596230 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Mar 11 02:04:09.596288 systemd[1]: Reloading... Mar 11 02:04:09.620983 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 11 02:04:09.621840 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 11 02:04:09.623955 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 11 02:04:09.624754 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Mar 11 02:04:09.624913 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Mar 11 02:04:09.630949 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Mar 11 02:04:09.630998 systemd-tmpfiles[1272]: Skipping /boot Mar 11 02:04:09.642158 systemd-udevd[1273]: Using default interface naming scheme 'v255'. 
Mar 11 02:04:09.658508 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Mar 11 02:04:09.658569 systemd-tmpfiles[1272]: Skipping /boot Mar 11 02:04:09.677420 zram_generator::config[1299]: No configuration found. Mar 11 02:04:09.820414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1316) Mar 11 02:04:09.889726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:04:10.000461 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 11 02:04:10.014744 kernel: ACPI: button: Power Button [PWRF] Mar 11 02:04:10.014817 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 11 02:04:10.043525 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 11 02:04:10.043941 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 11 02:04:10.044462 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 11 02:04:10.037785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 02:04:10.053968 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 11 02:04:10.054679 systemd[1]: Reloading finished in 457 ms. Mar 11 02:04:10.133102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:04:10.144535 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 11 02:04:10.174727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:04:10.227408 kernel: mousedev: PS/2 mouse device common for all mice Mar 11 02:04:10.306903 systemd[1]: Finished ensure-sysext.service. 
Mar 11 02:04:10.319229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:04:10.417780 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:04:10.443511 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 11 02:04:10.455616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 02:04:10.460227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 02:04:10.485950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 02:04:10.517718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 02:04:10.542826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 02:04:10.558526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 02:04:10.572207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 11 02:04:10.687176 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 11 02:04:10.755793 augenrules[1391]: No rules Mar 11 02:04:10.782190 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 02:04:10.831515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:04:10.928241 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 11 02:04:11.033277 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 11 02:04:11.063166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 11 02:04:11.079560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:04:11.081930 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:04:11.097697 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 11 02:04:11.115506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 11 02:04:11.118007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 02:04:11.146741 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 02:04:11.147149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 02:04:11.163615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 02:04:11.167109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 02:04:11.178621 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 02:04:11.178923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 02:04:11.193390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 11 02:04:11.214268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 11 02:04:11.267530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 11 02:04:11.286400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 02:04:11.286629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 02:04:11.334796 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 11 02:04:11.396561 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 11 02:04:11.397749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 11 02:04:11.398815 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 11 02:04:11.464481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:04:11.535490 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 11 02:04:11.603936 kernel: kvm_amd: TSC scaling supported Mar 11 02:04:11.604027 kernel: kvm_amd: Nested Virtualization enabled Mar 11 02:04:11.604130 kernel: kvm_amd: Nested Paging enabled Mar 11 02:04:11.609442 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 11 02:04:11.609484 kernel: kvm_amd: PMU virtualization is disabled Mar 11 02:04:11.934584 systemd-networkd[1397]: lo: Link UP Mar 11 02:04:11.934643 systemd-networkd[1397]: lo: Gained carrier Mar 11 02:04:11.941758 systemd-networkd[1397]: Enumeration completed Mar 11 02:04:11.941979 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 02:04:11.949287 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:04:11.949401 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:04:11.952525 systemd-networkd[1397]: eth0: Link UP Mar 11 02:04:11.952594 systemd-networkd[1397]: eth0: Gained carrier Mar 11 02:04:11.952684 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:04:11.966242 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 11 02:04:11.968282 systemd-resolved[1398]: Positive Trust Anchors: Mar 11 02:04:11.968852 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:04:11.968979 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:04:11.979768 systemd-resolved[1398]: Defaulting to hostname 'linux'. Mar 11 02:04:11.986164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:04:11.996640 systemd[1]: Reached target network.target - Network. Mar 11 02:04:12.002289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:04:12.025520 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:04:12.073573 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 11 02:04:12.091110 systemd[1]: Reached target time-set.target - System Time Set. Mar 11 02:04:12.103986 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 11 02:04:12.105150 systemd-timesyncd[1399]: Initial clock synchronization to Wed 2026-03-11 02:04:11.834635 UTC. Mar 11 02:04:12.225446 kernel: EDAC MC: Ver: 3.0.0 Mar 11 02:04:12.277894 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 11 02:04:12.313931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Mar 11 02:04:12.358928 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 02:04:12.431263 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 11 02:04:12.449847 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:04:12.466792 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:04:12.481625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 11 02:04:12.496231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 11 02:04:12.514923 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 11 02:04:12.528684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 11 02:04:12.547281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 11 02:04:12.566793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 11 02:04:12.568941 systemd[1]: Reached target paths.target - Path Units. Mar 11 02:04:12.574723 systemd[1]: Reached target timers.target - Timer Units. Mar 11 02:04:12.596454 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 11 02:04:12.621613 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 11 02:04:12.646820 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 11 02:04:12.668735 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 11 02:04:12.687902 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 11 02:04:12.697168 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 02:04:12.726578 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 11 02:04:12.707727 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:04:12.723705 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:04:12.723759 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:04:12.736605 systemd[1]: Starting containerd.service - containerd container runtime... Mar 11 02:04:12.769778 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 11 02:04:12.805791 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 11 02:04:12.819813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 11 02:04:12.825693 jq[1437]: false Mar 11 02:04:12.830696 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 11 02:04:12.844443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 11 02:04:12.865297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 11 02:04:12.896399 extend-filesystems[1438]: Found loop3 Mar 11 02:04:12.896399 extend-filesystems[1438]: Found loop4 Mar 11 02:04:12.896399 extend-filesystems[1438]: Found loop5 Mar 11 02:04:12.896399 extend-filesystems[1438]: Found sr0 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda1 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda2 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda3 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found usr Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda4 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda6 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda7 Mar 11 02:04:12.952019 extend-filesystems[1438]: Found vda9 Mar 11 02:04:12.952019 extend-filesystems[1438]: Checking size of /dev/vda9 Mar 11 02:04:12.933009 dbus-daemon[1436]: [system] SELinux support is enabled Mar 11 02:04:12.911172 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 11 02:04:12.941667 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 11 02:04:13.083752 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 11 02:04:13.111793 extend-filesystems[1438]: Resized partition /dev/vda9 Mar 11 02:04:13.112832 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 11 02:04:13.113709 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 11 02:04:13.123616 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 11 02:04:13.139357 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) Mar 11 02:04:13.216795 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 11 02:04:13.216845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1315) Mar 11 02:04:13.164725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 11 02:04:13.200849 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 11 02:04:13.236686 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 11 02:04:13.257746 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 11 02:04:13.258170 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 11 02:04:13.258799 systemd[1]: motdgen.service: Deactivated successfully. Mar 11 02:04:13.259159 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 11 02:04:13.274484 jq[1457]: true Mar 11 02:04:13.277226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 11 02:04:13.277685 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 11 02:04:13.338087 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 11 02:04:13.370092 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 02:04:13.390217 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 11 02:04:13.400616 update_engine[1454]: I20260311 02:04:13.391952 1454 main.cc:92] Flatcar Update Engine starting Mar 11 02:04:13.390387 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 11 02:04:13.441912 update_engine[1454]: I20260311 02:04:13.430795 1454 update_check_scheduler.cc:74] Next update check in 2m36s Mar 11 02:04:13.416983 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 11 02:04:13.417027 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 11 02:04:13.454737 systemd[1]: Started update-engine.service - Update Engine. Mar 11 02:04:13.469684 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Mar 11 02:04:13.517496 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 11 02:04:13.517596 tar[1461]: linux-amd64/LICENSE Mar 11 02:04:13.517596 tar[1461]: linux-amd64/helm Mar 11 02:04:13.469801 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 11 02:04:13.518125 jq[1462]: true Mar 11 02:04:13.478578 systemd-logind[1450]: New seat seat0. Mar 11 02:04:13.519052 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 11 02:04:13.519052 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 11 02:04:13.519052 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 11 02:04:13.494078 systemd[1]: Started systemd-logind.service - User Login Management. Mar 11 02:04:13.546025 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 11 02:04:13.546532 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Mar 11 02:04:13.521738 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 11 02:04:13.541436 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 11 02:04:13.541978 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 11 02:04:13.558725 systemd-networkd[1397]: eth0: Gained IPv6LL Mar 11 02:04:13.586822 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 11 02:04:13.599099 systemd[1]: Reached target network-online.target - Network is Online. Mar 11 02:04:13.635856 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 11 02:04:13.697490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:04:13.715849 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Mar 11 02:04:13.726669 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 11 02:04:13.747705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 11 02:04:13.787823 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 11 02:04:13.853530 kernel: hrtimer: interrupt took 2925511 ns Mar 11 02:04:13.905477 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 11 02:04:13.922753 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 11 02:04:13.923899 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962). Mar 11 02:04:13.932974 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 11 02:04:13.979130 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 02:04:13.998134 systemd[1]: issuegen.service: Deactivated successfully. Mar 11 02:04:13.999512 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 11 02:04:14.006568 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 11 02:04:14.007193 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 11 02:04:14.023682 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 11 02:04:14.043265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 11 02:04:14.252217 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 11 02:04:14.370960 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 02:04:14.403531 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 11 02:04:14.439892 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 02:04:14.552173 containerd[1464]: time="2026-03-11T02:04:14.551926067Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 11 02:04:14.570131 sshd[1522]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:14.581497 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:14.641085 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 02:04:14.736049 containerd[1464]: time="2026-03-11T02:04:14.731569773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.743403 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757118324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757164892Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757186575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757640145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757669706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757777358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.757798906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.758092528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.758116536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.758136596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:04:14.762807 containerd[1464]: time="2026-03-11T02:04:14.758151163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.763253 containerd[1464]: time="2026-03-11T02:04:14.758626767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Mar 11 02:04:14.763253 containerd[1464]: time="2026-03-11T02:04:14.758995232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:04:14.763253 containerd[1464]: time="2026-03-11T02:04:14.759183399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:04:14.763253 containerd[1464]: time="2026-03-11T02:04:14.759206775Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 11 02:04:14.769866 containerd[1464]: time="2026-03-11T02:04:14.769242389Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 11 02:04:14.769866 containerd[1464]: time="2026-03-11T02:04:14.769489958Z" level=info msg="metadata content store policy set" policy=shared Mar 11 02:04:14.773374 systemd-logind[1450]: New session 1 of user core. Mar 11 02:04:14.796483 containerd[1464]: time="2026-03-11T02:04:14.795994948Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 11 02:04:14.796483 containerd[1464]: time="2026-03-11T02:04:14.796082677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.797918551Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.797955715Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.797978459Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798488593Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798766384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798923648Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798947422Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798967697Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.798986610Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.799003860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.799020848Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.799121966Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.799145400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 11 02:04:14.801142 containerd[1464]: time="2026-03-11T02:04:14.799164498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799231758Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799254988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799364868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799432711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799461279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799481038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799499378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799519575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799538681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799559101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799577888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799599329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799615928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.801664 containerd[1464]: time="2026-03-11T02:04:14.799634491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799653481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799676751Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799706292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799725253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799743427Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799800701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799827996Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799845440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799863828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799878055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799897891Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799919954Z" level=info msg="NRI interface is disabled by configuration." Mar 11 02:04:14.802126 containerd[1464]: time="2026-03-11T02:04:14.799936903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 11 02:04:14.802669 containerd[1464]: time="2026-03-11T02:04:14.800471716Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 11 02:04:14.802669 containerd[1464]: time="2026-03-11T02:04:14.800556877Z" level=info msg="Connect containerd service" Mar 11 02:04:14.802669 containerd[1464]: time="2026-03-11T02:04:14.800620596Z" level=info msg="using legacy CRI server" Mar 11 02:04:14.802669 containerd[1464]: time="2026-03-11T02:04:14.800634550Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 02:04:14.802669 containerd[1464]: time="2026-03-11T02:04:14.800750790Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.808267594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.808641538Z" level=info msg="Start subscribing containerd event" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.808814992Z" level=info msg="Start recovering state" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.810012562Z" level=info msg="Start event monitor" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.810042162Z" level=info msg="Start 
snapshots syncer" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.810056553Z" level=info msg="Start cni network conf syncer for default" Mar 11 02:04:14.810360 containerd[1464]: time="2026-03-11T02:04:14.810066599Z" level=info msg="Start streaming server" Mar 11 02:04:14.811981 containerd[1464]: time="2026-03-11T02:04:14.811063340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 11 02:04:14.811981 containerd[1464]: time="2026-03-11T02:04:14.811185529Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 02:04:14.811981 containerd[1464]: time="2026-03-11T02:04:14.811357652Z" level=info msg="containerd successfully booted in 0.265071s" Mar 11 02:04:14.815390 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 02:04:14.840818 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 02:04:14.920184 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 02:04:15.238238 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 02:04:16.181511 systemd[1546]: Queued start job for default target default.target. Mar 11 02:04:16.210818 systemd[1546]: Created slice app.slice - User Application Slice. Mar 11 02:04:16.211053 systemd[1546]: Reached target paths.target - Paths. Mar 11 02:04:16.213004 systemd[1546]: Reached target timers.target - Timers. Mar 11 02:04:16.227484 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 02:04:16.300002 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 02:04:16.301211 systemd[1546]: Reached target sockets.target - Sockets. Mar 11 02:04:16.301238 systemd[1546]: Reached target basic.target - Basic System. Mar 11 02:04:16.301459 systemd[1546]: Reached target default.target - Main User Target. Mar 11 02:04:16.301618 systemd[1546]: Startup finished in 1.023s. 
Mar 11 02:04:16.303464 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 02:04:16.412555 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 02:04:16.597629 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:36974.service - OpenSSH per-connection server daemon (10.0.0.1:36974). Mar 11 02:04:16.741129 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 36974 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:16.760597 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:16.778968 systemd-logind[1450]: New session 2 of user core. Mar 11 02:04:16.798429 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 02:04:16.833547 tar[1461]: linux-amd64/README.md Mar 11 02:04:16.917193 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 02:04:16.957765 sshd[1557]: pam_unix(sshd:session): session closed for user core Mar 11 02:04:16.977910 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:36974.service: Deactivated successfully. Mar 11 02:04:16.984227 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 02:04:16.990593 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Mar 11 02:04:17.012038 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:36980.service - OpenSSH per-connection server daemon (10.0.0.1:36980). Mar 11 02:04:17.028911 systemd-logind[1450]: Removed session 2. Mar 11 02:04:17.120524 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 36980 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:17.130660 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:17.148552 systemd-logind[1450]: New session 3 of user core. Mar 11 02:04:17.168791 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 11 02:04:17.298184 sshd[1567]: pam_unix(sshd:session): session closed for user core Mar 11 02:04:17.313948 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:36980.service: Deactivated successfully. Mar 11 02:04:17.317761 systemd[1]: session-3.scope: Deactivated successfully. Mar 11 02:04:17.322712 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Mar 11 02:04:17.338254 systemd-logind[1450]: Removed session 3. Mar 11 02:04:18.147556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:18.162807 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 02:04:18.174643 systemd[1]: Startup finished in 2.154s (kernel) + 9.000s (initrd) + 11.368s (userspace) = 22.523s. Mar 11 02:04:18.176495 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:04:20.118091 kubelet[1578]: E0311 02:04:20.104559 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:04:20.129278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:04:20.130084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:04:20.134202 systemd[1]: kubelet.service: Consumed 2.969s CPU time. Mar 11 02:04:27.248970 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984). 
Mar 11 02:04:27.333042 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:27.336009 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:27.358700 systemd-logind[1450]: New session 4 of user core. Mar 11 02:04:27.370807 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 02:04:27.468623 sshd[1592]: pam_unix(sshd:session): session closed for user core Mar 11 02:04:27.483459 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:40984.service: Deactivated successfully. Mar 11 02:04:27.492898 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 02:04:27.499954 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Mar 11 02:04:27.518177 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:40996.service - OpenSSH per-connection server daemon (10.0.0.1:40996). Mar 11 02:04:27.526944 systemd-logind[1450]: Removed session 4. Mar 11 02:04:27.583792 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 40996 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:27.587266 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:27.609477 systemd-logind[1450]: New session 5 of user core. Mar 11 02:04:27.620711 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 02:04:27.689625 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 11 02:04:27.718961 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:40996.service: Deactivated successfully. Mar 11 02:04:27.729727 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 02:04:27.733991 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Mar 11 02:04:27.764511 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:41006.service - OpenSSH per-connection server daemon (10.0.0.1:41006). Mar 11 02:04:27.768887 systemd-logind[1450]: Removed session 5. 
Mar 11 02:04:27.856578 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:27.858973 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:27.877698 systemd-logind[1450]: New session 6 of user core. Mar 11 02:04:27.892484 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 02:04:27.978935 sshd[1606]: pam_unix(sshd:session): session closed for user core Mar 11 02:04:27.999810 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:41006.service: Deactivated successfully. Mar 11 02:04:28.005595 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 02:04:28.012775 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Mar 11 02:04:28.035895 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:41010.service - OpenSSH per-connection server daemon (10.0.0.1:41010). Mar 11 02:04:28.045120 systemd-logind[1450]: Removed session 6. Mar 11 02:04:28.127872 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 41010 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:04:28.126195 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:04:28.147368 systemd-logind[1450]: New session 7 of user core. Mar 11 02:04:28.169184 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 11 02:04:28.273872 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 02:04:28.274594 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:04:29.463915 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 11 02:04:29.464128 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 02:04:30.161691 dockerd[1635]: time="2026-03-11T02:04:30.161447476Z" level=info msg="Starting up" Mar 11 02:04:30.164168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 02:04:30.602597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:04:31.509625 dockerd[1635]: time="2026-03-11T02:04:31.509516116Z" level=info msg="Loading containers: start." Mar 11 02:04:31.525823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:31.531661 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:04:31.994402 kubelet[1668]: E0311 02:04:31.992041 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:04:32.000176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:04:32.000615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:04:32.001457 systemd[1]: kubelet.service: Consumed 1.385s CPU time. Mar 11 02:04:32.172818 kernel: Initializing XFRM netlink socket Mar 11 02:04:32.595214 systemd-networkd[1397]: docker0: Link UP Mar 11 02:04:32.630143 dockerd[1635]: time="2026-03-11T02:04:32.630021538Z" level=info msg="Loading containers: done." 
Mar 11 02:04:32.704379 dockerd[1635]: time="2026-03-11T02:04:32.703875585Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 02:04:32.704379 dockerd[1635]: time="2026-03-11T02:04:32.704026586Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 02:04:32.704379 dockerd[1635]: time="2026-03-11T02:04:32.704171360Z" level=info msg="Daemon has completed initialization" Mar 11 02:04:32.880474 dockerd[1635]: time="2026-03-11T02:04:32.878647532Z" level=info msg="API listen on /run/docker.sock" Mar 11 02:04:32.880704 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 02:04:34.510472 containerd[1464]: time="2026-03-11T02:04:34.510236538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 11 02:04:35.624431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601225924.mount: Deactivated successfully. 
Mar 11 02:04:36.967027 containerd[1464]: time="2026-03-11T02:04:36.966883818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:36.968075 containerd[1464]: time="2026-03-11T02:04:36.967900923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 11 02:04:36.969172 containerd[1464]: time="2026-03-11T02:04:36.969095397Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:36.973423 containerd[1464]: time="2026-03-11T02:04:36.973281917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:36.975150 containerd[1464]: time="2026-03-11T02:04:36.975066676Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.463566378s" Mar 11 02:04:36.975150 containerd[1464]: time="2026-03-11T02:04:36.975136839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 11 02:04:36.978126 containerd[1464]: time="2026-03-11T02:04:36.977916594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 11 02:04:38.394167 containerd[1464]: time="2026-03-11T02:04:38.394066803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:38.395047 containerd[1464]: time="2026-03-11T02:04:38.394924709Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 11 02:04:38.399525 containerd[1464]: time="2026-03-11T02:04:38.399433227Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:38.403139 containerd[1464]: time="2026-03-11T02:04:38.403081234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:38.404487 containerd[1464]: time="2026-03-11T02:04:38.404420382Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.426425428s" Mar 11 02:04:38.404726 containerd[1464]: time="2026-03-11T02:04:38.404675294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 11 02:04:38.405528 containerd[1464]: time="2026-03-11T02:04:38.405470140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 11 02:04:39.316025 containerd[1464]: time="2026-03-11T02:04:39.315911481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:39.317432 containerd[1464]: time="2026-03-11T02:04:39.317243769Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 11 02:04:39.319253 containerd[1464]: time="2026-03-11T02:04:39.319097715Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:39.324122 containerd[1464]: time="2026-03-11T02:04:39.323972364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:39.325543 containerd[1464]: time="2026-03-11T02:04:39.325448759Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 919.914345ms" Mar 11 02:04:39.325543 containerd[1464]: time="2026-03-11T02:04:39.325476853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 11 02:04:39.326654 containerd[1464]: time="2026-03-11T02:04:39.326562672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 11 02:04:40.431066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175349537.mount: Deactivated successfully. 
Mar 11 02:04:40.752957 containerd[1464]: time="2026-03-11T02:04:40.752646907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:40.754438 containerd[1464]: time="2026-03-11T02:04:40.754266131Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 11 02:04:40.755658 containerd[1464]: time="2026-03-11T02:04:40.755593917Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:40.758273 containerd[1464]: time="2026-03-11T02:04:40.758187699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:40.759182 containerd[1464]: time="2026-03-11T02:04:40.759138368Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.432497101s" Mar 11 02:04:40.759231 containerd[1464]: time="2026-03-11T02:04:40.759187825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 11 02:04:40.760372 containerd[1464]: time="2026-03-11T02:04:40.760073900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 11 02:04:41.429553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116853099.mount: Deactivated successfully. 
Mar 11 02:04:42.077237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 11 02:04:42.085894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:04:42.315720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:42.319047 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:04:42.415754 kubelet[1935]: E0311 02:04:42.415426 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:04:42.421767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:04:42.422237 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 11 02:04:42.731232 containerd[1464]: time="2026-03-11T02:04:42.731067838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:42.732646 containerd[1464]: time="2026-03-11T02:04:42.732584579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 11 02:04:42.734061 containerd[1464]: time="2026-03-11T02:04:42.733993956Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:42.737854 containerd[1464]: time="2026-03-11T02:04:42.737782584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:42.739428 containerd[1464]: time="2026-03-11T02:04:42.739367507Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.979262617s" Mar 11 02:04:42.739488 containerd[1464]: time="2026-03-11T02:04:42.739434206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 11 02:04:42.740392 containerd[1464]: time="2026-03-11T02:04:42.740285779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 02:04:43.158711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504006989.mount: Deactivated successfully. 
Mar 11 02:04:43.168526 containerd[1464]: time="2026-03-11T02:04:43.168105131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:43.169276 containerd[1464]: time="2026-03-11T02:04:43.169233367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 11 02:04:43.171283 containerd[1464]: time="2026-03-11T02:04:43.171218859Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:43.176015 containerd[1464]: time="2026-03-11T02:04:43.175927940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:43.177498 containerd[1464]: time="2026-03-11T02:04:43.177435196Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 437.010793ms" Mar 11 02:04:43.177498 containerd[1464]: time="2026-03-11T02:04:43.177491436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 11 02:04:43.178292 containerd[1464]: time="2026-03-11T02:04:43.178181970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 11 02:04:44.567201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476851348.mount: Deactivated successfully. 
Mar 11 02:04:46.181017 containerd[1464]: time="2026-03-11T02:04:46.180689343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:46.182772 containerd[1464]: time="2026-03-11T02:04:46.181964687Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 11 02:04:46.183707 containerd[1464]: time="2026-03-11T02:04:46.183637688Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:46.188156 containerd[1464]: time="2026-03-11T02:04:46.187957262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:04:46.190639 containerd[1464]: time="2026-03-11T02:04:46.190507881Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.012256019s" Mar 11 02:04:46.190639 containerd[1464]: time="2026-03-11T02:04:46.190618063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 11 02:04:49.783116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:49.798821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:04:49.842709 systemd[1]: Reloading requested from client PID 2042 ('systemctl') (unit session-7.scope)... Mar 11 02:04:49.842764 systemd[1]: Reloading... 
Mar 11 02:04:50.020512 zram_generator::config[2081]: No configuration found. Mar 11 02:04:50.184538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:04:50.379857 systemd[1]: Reloading finished in 536 ms. Mar 11 02:04:50.466058 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 02:04:50.466225 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 02:04:50.466847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:50.481946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:04:50.697743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:04:50.698018 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:04:50.847727 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 02:04:50.847727 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 11 02:04:50.848229 kubelet[2128]: I0311 02:04:50.847753 2128 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 02:04:52.626386 kubelet[2128]: I0311 02:04:52.625093 2128 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 11 02:04:52.626386 kubelet[2128]: I0311 02:04:52.625171 2128 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:04:52.626386 kubelet[2128]: I0311 02:04:52.625232 2128 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 02:04:52.626386 kubelet[2128]: I0311 02:04:52.625245 2128 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 11 02:04:52.627726 kubelet[2128]: I0311 02:04:52.626792 2128 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 02:04:52.676540 kubelet[2128]: I0311 02:04:52.676279 2128 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 02:04:52.677795 kubelet[2128]: E0311 02:04:52.677752 2128 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 02:04:52.682962 kubelet[2128]: E0311 02:04:52.682873 2128 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 02:04:52.683014 kubelet[2128]: I0311 02:04:52.682976 2128 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 11 02:04:52.691481 kubelet[2128]: I0311 02:04:52.691401 2128 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 11 02:04:52.694344 kubelet[2128]: I0311 02:04:52.694221 2128 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 02:04:52.694701 kubelet[2128]: I0311 02:04:52.694382 2128 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 
02:04:52.694701 kubelet[2128]: I0311 02:04:52.694664 2128 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 02:04:52.694701 kubelet[2128]: I0311 02:04:52.694674 2128 container_manager_linux.go:306] "Creating device plugin manager" Mar 11 02:04:52.695640 kubelet[2128]: I0311 02:04:52.694797 2128 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 02:04:52.698803 kubelet[2128]: I0311 02:04:52.698657 2128 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:04:52.699363 kubelet[2128]: I0311 02:04:52.699166 2128 kubelet.go:475] "Attempting to sync node with API server" Mar 11 02:04:52.699363 kubelet[2128]: I0311 02:04:52.699245 2128 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 02:04:52.699363 kubelet[2128]: I0311 02:04:52.699281 2128 kubelet.go:387] "Adding apiserver pod source" Mar 11 02:04:52.699493 kubelet[2128]: I0311 02:04:52.699373 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 02:04:52.701056 kubelet[2128]: E0311 02:04:52.700705 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:04:52.701056 kubelet[2128]: E0311 02:04:52.700705 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 02:04:52.703772 kubelet[2128]: I0311 02:04:52.703719 2128 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Mar 11 02:04:52.704679 kubelet[2128]: I0311 02:04:52.704623 2128 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 02:04:52.704679 kubelet[2128]: I0311 02:04:52.704677 2128 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 02:04:52.704980 kubelet[2128]: W0311 02:04:52.704924 2128 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 11 02:04:52.712006 kubelet[2128]: I0311 02:04:52.711908 2128 server.go:1262] "Started kubelet" Mar 11 02:04:52.712993 kubelet[2128]: I0311 02:04:52.712261 2128 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 02:04:52.728844 kubelet[2128]: I0311 02:04:52.726891 2128 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 02:04:52.728844 kubelet[2128]: I0311 02:04:52.727067 2128 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 02:04:52.731941 kubelet[2128]: E0311 02:04:52.728614 2128 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba72da0b14177 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:04:52.711760247 +0000 UTC m=+1.991937193,LastTimestamp:2026-03-11 02:04:52.711760247 +0000 UTC m=+1.991937193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 11 02:04:52.735506 kubelet[2128]: I0311 02:04:52.735408 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 02:04:52.736535 kubelet[2128]: I0311 02:04:52.736510 2128 server.go:310] "Adding debug handlers to kubelet server" Mar 11 02:04:52.739228 kubelet[2128]: I0311 02:04:52.739024 2128 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 02:04:52.742405 kubelet[2128]: I0311 02:04:52.742259 2128 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 02:04:52.755225 kubelet[2128]: I0311 02:04:52.755198 2128 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 11 02:04:52.755636 kubelet[2128]: I0311 02:04:52.755617 2128 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 02:04:52.755893 kubelet[2128]: I0311 02:04:52.755764 2128 reconciler.go:29] "Reconciler: start to sync state" Mar 11 02:04:52.756761 kubelet[2128]: E0311 02:04:52.756734 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:04:52.758377 kubelet[2128]: E0311 02:04:52.757704 2128 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:04:52.759229 kubelet[2128]: I0311 02:04:52.759205 2128 factory.go:223] Registration of the systemd container factory successfully Mar 11 02:04:52.759531 kubelet[2128]: I0311 02:04:52.759510 2128 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 02:04:52.761843 kubelet[2128]: E0311 02:04:52.761789 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Mar 11 02:04:52.762161 kubelet[2128]: I0311 02:04:52.762083 2128 factory.go:223] Registration of the containerd container factory successfully Mar 11 02:04:52.763806 kubelet[2128]: E0311 02:04:52.763730 2128 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 02:04:52.768466 kubelet[2128]: I0311 02:04:52.768099 2128 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 11 02:04:52.804104 kubelet[2128]: I0311 02:04:52.804075 2128 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 02:04:52.804261 kubelet[2128]: I0311 02:04:52.804246 2128 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 02:04:52.804547 kubelet[2128]: I0311 02:04:52.804533 2128 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:04:52.809652 kubelet[2128]: I0311 02:04:52.809634 2128 policy_none.go:49] "None policy: Start" Mar 11 02:04:52.809730 kubelet[2128]: I0311 02:04:52.809714 2128 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 02:04:52.809807 kubelet[2128]: I0311 02:04:52.809791 2128 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 02:04:52.811275 kubelet[2128]: I0311 02:04:52.811226 2128 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 11 02:04:52.812213 kubelet[2128]: I0311 02:04:52.811280 2128 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 11 02:04:52.812213 kubelet[2128]: I0311 02:04:52.811379 2128 kubelet.go:2428] "Starting kubelet main sync loop" Mar 11 02:04:52.812213 kubelet[2128]: E0311 02:04:52.811471 2128 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 02:04:52.813199 kubelet[2128]: E0311 02:04:52.813137 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 02:04:52.814830 kubelet[2128]: I0311 02:04:52.814814 2128 policy_none.go:47] "Start" Mar 11 02:04:52.823281 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 11 02:04:52.838579 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 11 02:04:52.842711 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 11 02:04:52.852708 kubelet[2128]: E0311 02:04:52.852628 2128 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 02:04:52.853188 kubelet[2128]: I0311 02:04:52.853107 2128 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 02:04:52.853188 kubelet[2128]: I0311 02:04:52.853163 2128 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 02:04:52.853753 kubelet[2128]: I0311 02:04:52.853668 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 02:04:52.854853 kubelet[2128]: E0311 02:04:52.854754 2128 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 11 02:04:52.854965 kubelet[2128]: E0311 02:04:52.854914 2128 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 11 02:04:52.930286 systemd[1]: Created slice kubepods-burstable-podabc6fa5580f6f82863a0c6a13af2f188.slice - libcontainer container kubepods-burstable-podabc6fa5580f6f82863a0c6a13af2f188.slice. Mar 11 02:04:52.949236 kubelet[2128]: E0311 02:04:52.949166 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:04:52.953576 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. 
Mar 11 02:04:52.956279 kubelet[2128]: I0311 02:04:52.956199 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:04:52.956279 kubelet[2128]: I0311 02:04:52.956263 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 11 02:04:52.956457 kubelet[2128]: I0311 02:04:52.956287 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:04:52.956502 kubelet[2128]: I0311 02:04:52.956407 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:04:52.956502 kubelet[2128]: I0311 02:04:52.956488 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost" Mar 11 
02:04:52.956546 kubelet[2128]: I0311 02:04:52.956507 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:04:52.956546 kubelet[2128]: I0311 02:04:52.956526 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:04:52.956585 kubelet[2128]: I0311 02:04:52.956545 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:04:52.956585 kubelet[2128]: I0311 02:04:52.956566 2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:04:52.956779 kubelet[2128]: I0311 02:04:52.956699 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:04:52.957539 kubelet[2128]: E0311 02:04:52.957489 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Mar 11 
02:04:52.963215 kubelet[2128]: E0311 02:04:52.963150 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Mar 11 02:04:52.963262 kubelet[2128]: E0311 02:04:52.963194 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:04:52.966226 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 11 02:04:52.969588 kubelet[2128]: E0311 02:04:52.969523 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:04:53.184254 kubelet[2128]: I0311 02:04:53.183554 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:04:53.184597 kubelet[2128]: E0311 02:04:53.184570 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Mar 11 02:04:53.268018 kubelet[2128]: E0311 02:04:53.266607 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:04:53.278938 containerd[1464]: time="2026-03-11T02:04:53.278547864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abc6fa5580f6f82863a0c6a13af2f188,Namespace:kube-system,Attempt:0,}" Mar 11 02:04:53.283711 kubelet[2128]: E0311 02:04:53.280755 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:04:53.287701 containerd[1464]: time="2026-03-11T02:04:53.285685007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 11 02:04:53.293084 kubelet[2128]: E0311 02:04:53.292974 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:04:53.294532 containerd[1464]: time="2026-03-11T02:04:53.294293334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 11 02:04:53.376034 kubelet[2128]: E0311 02:04:53.374478 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Mar 11 02:04:53.630139 kubelet[2128]: E0311 02:04:53.629738 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:04:53.634522 kubelet[2128]: I0311 02:04:53.633680 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:04:53.644274 kubelet[2128]: E0311 02:04:53.643895 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Mar 11 02:04:53.761105 kubelet[2128]: E0311 02:04:53.759551 2128 reflector.go:205] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:04:53.830720 kubelet[2128]: E0311 02:04:53.830392 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 02:04:53.830720 kubelet[2128]: E0311 02:04:53.830396 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 02:04:54.183613 kubelet[2128]: E0311 02:04:54.182692 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Mar 11 02:04:54.370864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845673475.mount: Deactivated successfully. 
Mar 11 02:04:54.411793 containerd[1464]: time="2026-03-11T02:04:54.411549830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:04:54.425820 containerd[1464]: time="2026-03-11T02:04:54.425526718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 11 02:04:54.441163 containerd[1464]: time="2026-03-11T02:04:54.429192605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:04:54.443509 containerd[1464]: time="2026-03-11T02:04:54.442582741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:04:54.450573 containerd[1464]: time="2026-03-11T02:04:54.448389397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:04:54.467700 containerd[1464]: time="2026-03-11T02:04:54.467128972Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:04:54.481499 containerd[1464]: time="2026-03-11T02:04:54.480292898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:04:54.495379 kubelet[2128]: I0311 02:04:54.494780 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:04:54.495379 kubelet[2128]: E0311 02:04:54.495676 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: 
connect: connection refused" node="localhost" Mar 11 02:04:54.504610 containerd[1464]: time="2026-03-11T02:04:54.504487927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:04:54.512217 containerd[1464]: time="2026-03-11T02:04:54.512066050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.224167972s" Mar 11 02:04:54.517282 containerd[1464]: time="2026-03-11T02:04:54.516585879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.237622766s" Mar 11 02:04:54.519251 containerd[1464]: time="2026-03-11T02:04:54.519118824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.224618257s" Mar 11 02:04:54.808674 kubelet[2128]: E0311 02:04:54.807922 2128 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 02:04:55.596957 containerd[1464]: time="2026-03-11T02:04:55.593977460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:04:55.596957 containerd[1464]: time="2026-03-11T02:04:55.594093585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:04:55.596957 containerd[1464]: time="2026-03-11T02:04:55.594115495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.596957 containerd[1464]: time="2026-03-11T02:04:55.594238333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.632592 kubelet[2128]: E0311 02:04:55.626807 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:04:55.632762 containerd[1464]: time="2026-03-11T02:04:55.631249550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:04:55.650140 containerd[1464]: time="2026-03-11T02:04:55.646759968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:04:55.650140 containerd[1464]: time="2026-03-11T02:04:55.646810551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.650140 containerd[1464]: time="2026-03-11T02:04:55.646979865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.670167 containerd[1464]: time="2026-03-11T02:04:55.666506989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:04:55.670167 containerd[1464]: time="2026-03-11T02:04:55.666739411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:04:55.670167 containerd[1464]: time="2026-03-11T02:04:55.666763035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.670167 containerd[1464]: time="2026-03-11T02:04:55.666892845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:04:55.969824 kubelet[2128]: E0311 02:04:55.783855 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="3.2s" Mar 11 02:04:56.007029 systemd[1]: Started cri-containerd-c649924d53bfe460631dac515d59a55c2686f9a1e50cf42423ad7b3cfecd9e3e.scope - libcontainer container c649924d53bfe460631dac515d59a55c2686f9a1e50cf42423ad7b3cfecd9e3e. Mar 11 02:04:56.075716 systemd[1]: Started cri-containerd-8ac6ea791b0ab68b59b86ae25262c28e09f72dcb0f8f1c51e1e05120d11729e3.scope - libcontainer container 8ac6ea791b0ab68b59b86ae25262c28e09f72dcb0f8f1c51e1e05120d11729e3. 
Mar 11 02:04:56.116568 systemd[1]: Started cri-containerd-be916058d4c8ce68ae7ffbcbebbb96cbca6d2af47e7e12880be820e446c3d8e8.scope - libcontainer container be916058d4c8ce68ae7ffbcbebbb96cbca6d2af47e7e12880be820e446c3d8e8.
Mar 11 02:04:56.119435 kubelet[2128]: I0311 02:04:56.118747 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:04:56.125564 kubelet[2128]: E0311 02:04:56.125261 2128 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Mar 11 02:04:56.304688 kubelet[2128]: E0311 02:04:56.301815 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 11 02:04:56.661911 kubelet[2128]: E0311 02:04:56.656250 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 11 02:04:56.707586 containerd[1464]: time="2026-03-11T02:04:56.705833807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abc6fa5580f6f82863a0c6a13af2f188,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ac6ea791b0ab68b59b86ae25262c28e09f72dcb0f8f1c51e1e05120d11729e3\""
Mar 11 02:04:56.714489 kubelet[2128]: E0311 02:04:56.711757 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:56.746831 containerd[1464]: time="2026-03-11T02:04:56.746736135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"c649924d53bfe460631dac515d59a55c2686f9a1e50cf42423ad7b3cfecd9e3e\""
Mar 11 02:04:56.758699 kubelet[2128]: E0311 02:04:56.748431 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:56.769026 containerd[1464]: time="2026-03-11T02:04:56.768901868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"be916058d4c8ce68ae7ffbcbebbb96cbca6d2af47e7e12880be820e446c3d8e8\""
Mar 11 02:04:56.771848 kubelet[2128]: E0311 02:04:56.770624 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:56.827435 kubelet[2128]: E0311 02:04:56.827210 2128 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 11 02:04:56.831816 containerd[1464]: time="2026-03-11T02:04:56.831658121Z" level=info msg="CreateContainer within sandbox \"8ac6ea791b0ab68b59b86ae25262c28e09f72dcb0f8f1c51e1e05120d11729e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 11 02:04:56.836778 containerd[1464]: time="2026-03-11T02:04:56.836662825Z" level=info msg="CreateContainer within sandbox \"c649924d53bfe460631dac515d59a55c2686f9a1e50cf42423ad7b3cfecd9e3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 11 02:04:56.845824 containerd[1464]: time="2026-03-11T02:04:56.845723751Z" level=info msg="CreateContainer within sandbox \"be916058d4c8ce68ae7ffbcbebbb96cbca6d2af47e7e12880be820e446c3d8e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 11 02:04:56.874745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338054163.mount: Deactivated successfully.
Mar 11 02:04:56.903501 containerd[1464]: time="2026-03-11T02:04:56.903447865Z" level=info msg="CreateContainer within sandbox \"8ac6ea791b0ab68b59b86ae25262c28e09f72dcb0f8f1c51e1e05120d11729e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2dda92a4a422b347d305bfea2cacc5cd3e71a91a4872abe1d4873a731f610227\""
Mar 11 02:04:56.904777 containerd[1464]: time="2026-03-11T02:04:56.904655165Z" level=info msg="StartContainer for \"2dda92a4a422b347d305bfea2cacc5cd3e71a91a4872abe1d4873a731f610227\""
Mar 11 02:04:57.008275 containerd[1464]: time="2026-03-11T02:04:57.007731854Z" level=info msg="CreateContainer within sandbox \"c649924d53bfe460631dac515d59a55c2686f9a1e50cf42423ad7b3cfecd9e3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc8435b2d9b8ee7a54773ea265c72917120af5a517a869790a4b9522283b2a31\""
Mar 11 02:04:57.009191 containerd[1464]: time="2026-03-11T02:04:57.009159513Z" level=info msg="StartContainer for \"fc8435b2d9b8ee7a54773ea265c72917120af5a517a869790a4b9522283b2a31\""
Mar 11 02:04:57.072270 containerd[1464]: time="2026-03-11T02:04:57.069945537Z" level=info msg="CreateContainer within sandbox \"be916058d4c8ce68ae7ffbcbebbb96cbca6d2af47e7e12880be820e446c3d8e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4840f8512f190532a1863db1fc49c2c23587ce6dfc8c50624314920c8efd75d\""
Mar 11 02:04:57.075632 containerd[1464]: time="2026-03-11T02:04:57.074517772Z" level=info msg="StartContainer for \"d4840f8512f190532a1863db1fc49c2c23587ce6dfc8c50624314920c8efd75d\""
Mar 11 02:04:57.353672 systemd[1]: Started cri-containerd-fc8435b2d9b8ee7a54773ea265c72917120af5a517a869790a4b9522283b2a31.scope - libcontainer container fc8435b2d9b8ee7a54773ea265c72917120af5a517a869790a4b9522283b2a31.
Mar 11 02:04:57.374801 systemd[1]: Started cri-containerd-2dda92a4a422b347d305bfea2cacc5cd3e71a91a4872abe1d4873a731f610227.scope - libcontainer container 2dda92a4a422b347d305bfea2cacc5cd3e71a91a4872abe1d4873a731f610227.
Mar 11 02:04:57.444891 systemd[1]: Started cri-containerd-d4840f8512f190532a1863db1fc49c2c23587ce6dfc8c50624314920c8efd75d.scope - libcontainer container d4840f8512f190532a1863db1fc49c2c23587ce6dfc8c50624314920c8efd75d.
Mar 11 02:04:57.548922 containerd[1464]: time="2026-03-11T02:04:57.544800210Z" level=info msg="StartContainer for \"2dda92a4a422b347d305bfea2cacc5cd3e71a91a4872abe1d4873a731f610227\" returns successfully"
Mar 11 02:04:57.679270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995439988.mount: Deactivated successfully.
Mar 11 02:04:57.700690 containerd[1464]: time="2026-03-11T02:04:57.697097111Z" level=info msg="StartContainer for \"fc8435b2d9b8ee7a54773ea265c72917120af5a517a869790a4b9522283b2a31\" returns successfully"
Mar 11 02:04:57.734422 containerd[1464]: time="2026-03-11T02:04:57.733501184Z" level=info msg="StartContainer for \"d4840f8512f190532a1863db1fc49c2c23587ce6dfc8c50624314920c8efd75d\" returns successfully"
Mar 11 02:04:58.077687 kubelet[2128]: E0311 02:04:58.077617 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:58.078234 kubelet[2128]: E0311 02:04:58.078002 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:58.089241 kubelet[2128]: E0311 02:04:58.089046 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:58.091420 kubelet[2128]: E0311 02:04:58.090921 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:58.091993 kubelet[2128]: E0311 02:04:58.091926 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:58.092168 kubelet[2128]: E0311 02:04:58.092110 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:58.654812 update_engine[1454]: I20260311 02:04:58.652598 1454 update_attempter.cc:509] Updating boot flags...
Mar 11 02:04:58.851587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2422)
Mar 11 02:04:59.180140 kubelet[2128]: E0311 02:04:59.180063 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:59.180887 kubelet[2128]: E0311 02:04:59.180425 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:59.192813 kubelet[2128]: E0311 02:04:59.192746 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:59.192992 kubelet[2128]: E0311 02:04:59.192928 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:59.196501 kubelet[2128]: E0311 02:04:59.196093 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:04:59.196563 kubelet[2128]: E0311 02:04:59.196554 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:04:59.346208 kubelet[2128]: I0311 02:04:59.343961 2128 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:05:00.332013 kubelet[2128]: E0311 02:05:00.331834 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:05:00.332013 kubelet[2128]: E0311 02:05:00.331872 2128 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:05:00.332013 kubelet[2128]: E0311 02:05:00.332026 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:00.332013 kubelet[2128]: E0311 02:05:00.332048 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:01.028673 kubelet[2128]: E0311 02:05:01.028556 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 11 02:05:01.114807 kubelet[2128]: I0311 02:05:01.114668 2128 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:05:01.114940 kubelet[2128]: E0311 02:05:01.114888 2128 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 11 02:05:01.159235 kubelet[2128]: I0311 02:05:01.158781 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:01.167768 kubelet[2128]: E0311 02:05:01.167548 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:01.167768 kubelet[2128]: I0311 02:05:01.167580 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:01.170294 kubelet[2128]: E0311 02:05:01.170220 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:01.170294 kubelet[2128]: I0311 02:05:01.170239 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:01.172211 kubelet[2128]: E0311 02:05:01.172049 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:01.332664 kubelet[2128]: I0311 02:05:01.332481 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:01.335584 kubelet[2128]: E0311 02:05:01.335521 2128 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:01.335815 kubelet[2128]: E0311 02:05:01.335752 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:01.706815 kubelet[2128]: I0311 02:05:01.706501 2128 apiserver.go:52] "Watching apiserver"
Mar 11 02:05:01.755984 kubelet[2128]: I0311 02:05:01.755840 2128 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:05:02.214180 kubelet[2128]: I0311 02:05:02.213930 2128 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:02.226903 kubelet[2128]: E0311 02:05:02.226818 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:02.335127 kubelet[2128]: E0311 02:05:02.334933 2128 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:03.713085 systemd[1]: Reloading requested from client PID 2436 ('systemctl') (unit session-7.scope)...
Mar 11 02:05:03.713133 systemd[1]: Reloading...
Mar 11 02:05:03.851232 zram_generator::config[2478]: No configuration found.
Mar 11 02:05:03.985746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:05:04.088879 systemd[1]: Reloading finished in 375 ms.
Mar 11 02:05:04.158834 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:05:04.171811 systemd[1]: kubelet.service: Deactivated successfully.
Mar 11 02:05:04.172242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:05:04.172424 systemd[1]: kubelet.service: Consumed 4.799s CPU time, 130.2M memory peak, 0B memory swap peak.
Mar 11 02:05:04.188951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:05:04.390614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:05:04.399062 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 11 02:05:04.504714 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 11 02:05:04.504714 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 11 02:05:04.505141 kubelet[2520]: I0311 02:05:04.505016 2520 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 11 02:05:04.518856 kubelet[2520]: I0311 02:05:04.518771 2520 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 11 02:05:04.518856 kubelet[2520]: I0311 02:05:04.518830 2520 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 02:05:04.518856 kubelet[2520]: I0311 02:05:04.518859 2520 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 02:05:04.518856 kubelet[2520]: I0311 02:05:04.518870 2520 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 02:05:04.519172 kubelet[2520]: I0311 02:05:04.519108 2520 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 11 02:05:04.520553 kubelet[2520]: I0311 02:05:04.520497 2520 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 11 02:05:04.523057 kubelet[2520]: I0311 02:05:04.522945 2520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 02:05:04.528298 kubelet[2520]: E0311 02:05:04.528209 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 02:05:04.528298 kubelet[2520]: I0311 02:05:04.528293 2520 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 02:05:04.538271 kubelet[2520]: I0311 02:05:04.538191 2520 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 02:05:04.538743 kubelet[2520]: I0311 02:05:04.538680 2520 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 02:05:04.538902 kubelet[2520]: I0311 02:05:04.538739 2520 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 02:05:04.539092 kubelet[2520]: I0311 02:05:04.538905 2520 topology_manager.go:138] "Creating topology manager with none policy"
Mar 11 02:05:04.539092 kubelet[2520]: I0311 02:05:04.538916 2520 container_manager_linux.go:306] "Creating device plugin manager"
Mar 11 02:05:04.539092 kubelet[2520]: I0311 02:05:04.538941 2520 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 02:05:04.539281 kubelet[2520]: I0311 02:05:04.539168 2520 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:05:04.539585 kubelet[2520]: I0311 02:05:04.539525 2520 kubelet.go:475] "Attempting to sync node with API server"
Mar 11 02:05:04.539585 kubelet[2520]: I0311 02:05:04.539542 2520 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 02:05:04.539585 kubelet[2520]: I0311 02:05:04.539564 2520 kubelet.go:387] "Adding apiserver pod source"
Mar 11 02:05:04.539585 kubelet[2520]: I0311 02:05:04.539574 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 02:05:04.543708 kubelet[2520]: I0311 02:05:04.543290 2520 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 02:05:04.551497 kubelet[2520]: I0311 02:05:04.547714 2520 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 02:05:04.551497 kubelet[2520]: I0311 02:05:04.547759 2520 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 02:05:04.557618 kubelet[2520]: I0311 02:05:04.557594 2520 server.go:1262] "Started kubelet"
Mar 11 02:05:04.560885 kubelet[2520]: I0311 02:05:04.559186 2520 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 02:05:04.560885 kubelet[2520]: I0311 02:05:04.560170 2520 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 02:05:04.561787 kubelet[2520]: I0311 02:05:04.561747 2520 server.go:310] "Adding debug handlers to kubelet server"
Mar 11 02:05:04.562563 kubelet[2520]: I0311 02:05:04.562487 2520 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 02:05:04.564084 kubelet[2520]: I0311 02:05:04.564066 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 11 02:05:04.568526 kubelet[2520]: I0311 02:05:04.564917 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 02:05:04.572501 kubelet[2520]: I0311 02:05:04.571412 2520 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 02:05:04.573642 kubelet[2520]: I0311 02:05:04.573573 2520 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 11 02:05:04.574014 kubelet[2520]: I0311 02:05:04.573932 2520 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 02:05:04.574475 kubelet[2520]: I0311 02:05:04.574211 2520 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 02:05:04.574884 kubelet[2520]: E0311 02:05:04.574537 2520 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 02:05:04.576982 kubelet[2520]: I0311 02:05:04.576713 2520 factory.go:223] Registration of the systemd container factory successfully
Mar 11 02:05:04.576982 kubelet[2520]: I0311 02:05:04.576885 2520 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 02:05:04.582218 kubelet[2520]: I0311 02:05:04.581928 2520 factory.go:223] Registration of the containerd container factory successfully
Mar 11 02:05:04.586924 kubelet[2520]: I0311 02:05:04.586832 2520 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 02:05:04.608285 kubelet[2520]: I0311 02:05:04.608186 2520 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 02:05:04.608285 kubelet[2520]: I0311 02:05:04.608249 2520 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 11 02:05:04.608765 kubelet[2520]: I0311 02:05:04.608728 2520 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 11 02:05:04.610268 kubelet[2520]: E0311 02:05:04.609037 2520 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 02:05:04.648265 kubelet[2520]: I0311 02:05:04.648027 2520 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 11 02:05:04.648265 kubelet[2520]: I0311 02:05:04.648080 2520 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 11 02:05:04.648265 kubelet[2520]: I0311 02:05:04.648099 2520 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648278 2520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648289 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648451 2520 policy_none.go:49] "None policy: Start"
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648461 2520 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648473 2520 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648570 2520 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 11 02:05:04.649075 kubelet[2520]: I0311 02:05:04.648579 2520 policy_none.go:47] "Start"
Mar 11 02:05:04.658207 kubelet[2520]: E0311 02:05:04.658174 2520 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 11 02:05:04.658634 kubelet[2520]: I0311 02:05:04.658512 2520 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 11 02:05:04.658634 kubelet[2520]: I0311 02:05:04.658562 2520 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 11 02:05:04.658790 kubelet[2520]: I0311 02:05:04.658762 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 11 02:05:04.662415 kubelet[2520]: E0311 02:05:04.660877 2520 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 11 02:05:04.710793 kubelet[2520]: I0311 02:05:04.710543 2520 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.710793 kubelet[2520]: I0311 02:05:04.710697 2520 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:04.711204 kubelet[2520]: I0311 02:05:04.711137 2520 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:04.725155 kubelet[2520]: E0311 02:05:04.725067 2520 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.772129 kubelet[2520]: I0311 02:05:04.772047 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:05:04.776257 kubelet[2520]: I0311 02:05:04.776149 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:04.776257 kubelet[2520]: I0311 02:05:04.776226 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.776571 kubelet[2520]: I0311 02:05:04.776260 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.776571 kubelet[2520]: I0311 02:05:04.776294 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.776571 kubelet[2520]: I0311 02:05:04.776460 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 02:05:04.776571 kubelet[2520]: I0311 02:05:04.776484 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:04.776571 kubelet[2520]: I0311 02:05:04.776507 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.776797 kubelet[2520]: I0311 02:05:04.776531 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:04.776797 kubelet[2520]: I0311 02:05:04.776555 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abc6fa5580f6f82863a0c6a13af2f188-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc6fa5580f6f82863a0c6a13af2f188\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:05:04.787678 kubelet[2520]: I0311 02:05:04.786213 2520 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 11 02:05:04.787678 kubelet[2520]: I0311 02:05:04.786470 2520 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:05:05.020534 kubelet[2520]: E0311 02:05:05.020049 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.024415 kubelet[2520]: E0311 02:05:05.024128 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.028511 kubelet[2520]: E0311 02:05:05.026986 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.541673 kubelet[2520]: I0311 02:05:05.541501 2520 apiserver.go:52] "Watching apiserver"
Mar 11 02:05:05.575227 kubelet[2520]: I0311 02:05:05.575085 2520 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:05:05.629763 kubelet[2520]: I0311 02:05:05.629637 2520 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:05.632979 kubelet[2520]: E0311 02:05:05.632835 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.633782 kubelet[2520]: E0311 02:05:05.633634 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.638108 kubelet[2520]: E0311 02:05:05.638015 2520 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:05:05.638238 kubelet[2520]: E0311 02:05:05.638225 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:05:05.693873 kubelet[2520]: I0311 02:05:05.693164 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6931412319999999 podStartE2EDuration="1.693141232s" podCreationTimestamp="2026-03-11 02:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:05:05.670457206 +0000 UTC m=+1.264629990" watchObservedRunningTime="2026-03-11 02:05:05.693141232 +0000 UTC m=+1.287314006"
Mar 11 02:05:05.710921 kubelet[2520]: I0311 02:05:05.710627 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7106014379999999 podStartE2EDuration="1.710601438s" podCreationTimestamp="2026-03-11 02:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:05:05.694930118 +0000 UTC m=+1.289102953" watchObservedRunningTime="2026-03-11 02:05:05.710601438 +0000 UTC m=+1.304774342"
Mar 11 02:05:06.345890 sudo[1617]: pam_unix(sudo:session): session closed for user root
Mar 11 02:05:06.430789 sshd[1614]: pam_unix(sshd:session): session closed for user core
Mar 11 02:05:06.495995 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:41010.service: Deactivated successfully.
Mar 11 02:05:06.534455 systemd[1]: session-7.scope: Deactivated successfully.
Mar 11 02:05:06.535156 systemd[1]: session-7.scope: Consumed 8.422s CPU time, 164.7M memory peak, 0B memory swap peak.
Mar 11 02:05:06.540695 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Mar 11 02:05:06.580763 systemd-logind[1450]: Removed session 7.
Mar 11 02:05:06.701121 kubelet[2520]: E0311 02:05:06.696277 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:06.701121 kubelet[2520]: E0311 02:05:06.698433 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:06.711215 kubelet[2520]: E0311 02:05:06.707410 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:09.354128 kubelet[2520]: E0311 02:05:09.353944 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:09.694795 kubelet[2520]: E0311 02:05:09.694289 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:10.154806 kubelet[2520]: I0311 02:05:10.154518 2520 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 02:05:10.155142 containerd[1464]: time="2026-03-11T02:05:10.155039076Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 11 02:05:10.155990 kubelet[2520]: I0311 02:05:10.155446 2520 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 02:05:10.697373 kubelet[2520]: E0311 02:05:10.697186 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:11.145881 systemd[1]: Created slice kubepods-besteffort-pod8f684d2d_eb89_4f93_bafa_51061209e174.slice - libcontainer container kubepods-besteffort-pod8f684d2d_eb89_4f93_bafa_51061209e174.slice. Mar 11 02:05:11.162751 systemd[1]: Created slice kubepods-burstable-pod65e4b14f_88cd_445a_a10c_85b8365889a4.slice - libcontainer container kubepods-burstable-pod65e4b14f_88cd_445a_a10c_85b8365889a4.slice. Mar 11 02:05:11.232101 kubelet[2520]: I0311 02:05:11.231806 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/65e4b14f-88cd-445a-a10c-85b8365889a4-cni-plugin\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232101 kubelet[2520]: I0311 02:05:11.231896 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/65e4b14f-88cd-445a-a10c-85b8365889a4-flannel-cfg\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232101 kubelet[2520]: I0311 02:05:11.231917 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f684d2d-eb89-4f93-bafa-51061209e174-xtables-lock\") pod \"kube-proxy-v75g6\" (UID: \"8f684d2d-eb89-4f93-bafa-51061209e174\") " pod="kube-system/kube-proxy-v75g6" Mar 11 02:05:11.232101 kubelet[2520]: 
I0311 02:05:11.231932 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f684d2d-eb89-4f93-bafa-51061209e174-lib-modules\") pod \"kube-proxy-v75g6\" (UID: \"8f684d2d-eb89-4f93-bafa-51061209e174\") " pod="kube-system/kube-proxy-v75g6" Mar 11 02:05:11.232101 kubelet[2520]: I0311 02:05:11.231948 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gws77\" (UniqueName: \"kubernetes.io/projected/8f684d2d-eb89-4f93-bafa-51061209e174-kube-api-access-gws77\") pod \"kube-proxy-v75g6\" (UID: \"8f684d2d-eb89-4f93-bafa-51061209e174\") " pod="kube-system/kube-proxy-v75g6" Mar 11 02:05:11.232492 kubelet[2520]: I0311 02:05:11.232011 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65e4b14f-88cd-445a-a10c-85b8365889a4-xtables-lock\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232492 kubelet[2520]: I0311 02:05:11.232024 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/65e4b14f-88cd-445a-a10c-85b8365889a4-run\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232492 kubelet[2520]: I0311 02:05:11.232039 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbpj\" (UniqueName: \"kubernetes.io/projected/65e4b14f-88cd-445a-a10c-85b8365889a4-kube-api-access-brbpj\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232492 kubelet[2520]: I0311 02:05:11.232053 2520 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/65e4b14f-88cd-445a-a10c-85b8365889a4-cni\") pod \"kube-flannel-ds-9px68\" (UID: \"65e4b14f-88cd-445a-a10c-85b8365889a4\") " pod="kube-flannel/kube-flannel-ds-9px68" Mar 11 02:05:11.232492 kubelet[2520]: I0311 02:05:11.232065 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f684d2d-eb89-4f93-bafa-51061209e174-kube-proxy\") pod \"kube-proxy-v75g6\" (UID: \"8f684d2d-eb89-4f93-bafa-51061209e174\") " pod="kube-system/kube-proxy-v75g6" Mar 11 02:05:11.461146 kubelet[2520]: E0311 02:05:11.460862 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:11.463547 containerd[1464]: time="2026-03-11T02:05:11.462113741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v75g6,Uid:8f684d2d-eb89-4f93-bafa-51061209e174,Namespace:kube-system,Attempt:0,}" Mar 11 02:05:11.477114 kubelet[2520]: E0311 02:05:11.477031 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:11.477756 containerd[1464]: time="2026-03-11T02:05:11.477646912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9px68,Uid:65e4b14f-88cd-445a-a10c-85b8365889a4,Namespace:kube-flannel,Attempt:0,}" Mar 11 02:05:11.516511 containerd[1464]: time="2026-03-11T02:05:11.515938997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:05:11.516511 containerd[1464]: time="2026-03-11T02:05:11.516450589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:05:11.516511 containerd[1464]: time="2026-03-11T02:05:11.516476928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:11.516826 containerd[1464]: time="2026-03-11T02:05:11.516590671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:11.539885 containerd[1464]: time="2026-03-11T02:05:11.539050767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:05:11.539885 containerd[1464]: time="2026-03-11T02:05:11.539739088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:05:11.545399 containerd[1464]: time="2026-03-11T02:05:11.545032801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:11.545694 containerd[1464]: time="2026-03-11T02:05:11.545430449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:11.560782 systemd[1]: Started cri-containerd-90628bfa2d613a995e92fc590cb71533c69860ed5f0cb403054015164fc4b36e.scope - libcontainer container 90628bfa2d613a995e92fc590cb71533c69860ed5f0cb403054015164fc4b36e. Mar 11 02:05:11.582688 systemd[1]: Started cri-containerd-fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304.scope - libcontainer container fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304. 
Mar 11 02:05:11.632957 containerd[1464]: time="2026-03-11T02:05:11.632732439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v75g6,Uid:8f684d2d-eb89-4f93-bafa-51061209e174,Namespace:kube-system,Attempt:0,} returns sandbox id \"90628bfa2d613a995e92fc590cb71533c69860ed5f0cb403054015164fc4b36e\"" Mar 11 02:05:11.633916 kubelet[2520]: E0311 02:05:11.633688 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:11.649986 containerd[1464]: time="2026-03-11T02:05:11.649599005Z" level=info msg="CreateContainer within sandbox \"90628bfa2d613a995e92fc590cb71533c69860ed5f0cb403054015164fc4b36e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 02:05:11.665472 containerd[1464]: time="2026-03-11T02:05:11.665200572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9px68,Uid:65e4b14f-88cd-445a-a10c-85b8365889a4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\"" Mar 11 02:05:11.669622 kubelet[2520]: E0311 02:05:11.666612 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:11.669830 containerd[1464]: time="2026-03-11T02:05:11.668416499Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 11 02:05:11.718264 containerd[1464]: time="2026-03-11T02:05:11.717986666Z" level=info msg="CreateContainer within sandbox \"90628bfa2d613a995e92fc590cb71533c69860ed5f0cb403054015164fc4b36e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d094f46eb79f7377a564cf388da638a21d3e60296f048e022aac5341e1f01bbb\"" Mar 11 02:05:11.722767 containerd[1464]: time="2026-03-11T02:05:11.722496663Z" level=info msg="StartContainer for 
\"d094f46eb79f7377a564cf388da638a21d3e60296f048e022aac5341e1f01bbb\"" Mar 11 02:05:11.778674 systemd[1]: Started cri-containerd-d094f46eb79f7377a564cf388da638a21d3e60296f048e022aac5341e1f01bbb.scope - libcontainer container d094f46eb79f7377a564cf388da638a21d3e60296f048e022aac5341e1f01bbb. Mar 11 02:05:11.840399 containerd[1464]: time="2026-03-11T02:05:11.840166798Z" level=info msg="StartContainer for \"d094f46eb79f7377a564cf388da638a21d3e60296f048e022aac5341e1f01bbb\" returns successfully" Mar 11 02:05:11.915775 kubelet[2520]: E0311 02:05:11.915115 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:12.403506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652258916.mount: Deactivated successfully. Mar 11 02:05:12.484551 containerd[1464]: time="2026-03-11T02:05:12.484439127Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:12.486283 containerd[1464]: time="2026-03-11T02:05:12.486211888Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 11 02:05:12.488932 containerd[1464]: time="2026-03-11T02:05:12.488773538Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:12.493771 containerd[1464]: time="2026-03-11T02:05:12.493708301Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:12.495211 containerd[1464]: time="2026-03-11T02:05:12.495127973Z" level=info msg="Pulled image 
\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 826.669936ms" Mar 11 02:05:12.495211 containerd[1464]: time="2026-03-11T02:05:12.495194466Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 11 02:05:12.504102 containerd[1464]: time="2026-03-11T02:05:12.503967366Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 11 02:05:12.526576 containerd[1464]: time="2026-03-11T02:05:12.526464307Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce\"" Mar 11 02:05:12.527807 containerd[1464]: time="2026-03-11T02:05:12.527772872Z" level=info msg="StartContainer for \"37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce\"" Mar 11 02:05:12.584682 systemd[1]: Started cri-containerd-37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce.scope - libcontainer container 37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce. Mar 11 02:05:12.638586 systemd[1]: cri-containerd-37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce.scope: Deactivated successfully. 
Mar 11 02:05:12.640945 containerd[1464]: time="2026-03-11T02:05:12.640886192Z" level=info msg="StartContainer for \"37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce\" returns successfully" Mar 11 02:05:12.710573 kubelet[2520]: E0311 02:05:12.710137 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:12.716689 kubelet[2520]: E0311 02:05:12.716605 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:12.716689 kubelet[2520]: E0311 02:05:12.716617 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:12.724191 containerd[1464]: time="2026-03-11T02:05:12.723911543Z" level=info msg="shim disconnected" id=37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce namespace=k8s.io Mar 11 02:05:12.724191 containerd[1464]: time="2026-03-11T02:05:12.724075778Z" level=warning msg="cleaning up after shim disconnected" id=37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce namespace=k8s.io Mar 11 02:05:12.724191 containerd[1464]: time="2026-03-11T02:05:12.724091578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:05:13.352746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37b1d2ca9482fa3cadf5c080e62998e46ff12bd2ca9cd887bb61bebe1a73ffce-rootfs.mount: Deactivated successfully. 
Mar 11 02:05:13.721930 kubelet[2520]: E0311 02:05:13.721275 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:13.721930 kubelet[2520]: E0311 02:05:13.721666 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:13.721930 kubelet[2520]: E0311 02:05:13.721663 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:13.722795 containerd[1464]: time="2026-03-11T02:05:13.722495463Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 11 02:05:13.739259 kubelet[2520]: I0311 02:05:13.739130 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v75g6" podStartSLOduration=2.739113783 podStartE2EDuration="2.739113783s" podCreationTimestamp="2026-03-11 02:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:05:12.759848068 +0000 UTC m=+8.354020843" watchObservedRunningTime="2026-03-11 02:05:13.739113783 +0000 UTC m=+9.333286556" Mar 11 02:05:13.915146 kubelet[2520]: E0311 02:05:13.915074 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:14.722999 kubelet[2520]: E0311 02:05:14.722937 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:15.366991 containerd[1464]: time="2026-03-11T02:05:15.366877680Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:15.368993 containerd[1464]: time="2026-03-11T02:05:15.368862962Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 11 02:05:15.370796 containerd[1464]: time="2026-03-11T02:05:15.370719242Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:15.376191 containerd[1464]: time="2026-03-11T02:05:15.376071755Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:05:15.378552 containerd[1464]: time="2026-03-11T02:05:15.378443190Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 1.655907662s" Mar 11 02:05:15.378552 containerd[1464]: time="2026-03-11T02:05:15.378494475Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 11 02:05:15.386151 containerd[1464]: time="2026-03-11T02:05:15.386079942Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 02:05:15.403788 containerd[1464]: time="2026-03-11T02:05:15.403668527Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec\"" Mar 11 02:05:15.404876 containerd[1464]: time="2026-03-11T02:05:15.404682359Z" level=info msg="StartContainer for \"3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec\"" Mar 11 02:05:15.451586 systemd[1]: Started cri-containerd-3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec.scope - libcontainer container 3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec. Mar 11 02:05:15.493514 systemd[1]: cri-containerd-3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec.scope: Deactivated successfully. Mar 11 02:05:15.497904 containerd[1464]: time="2026-03-11T02:05:15.497751806Z" level=info msg="StartContainer for \"3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec\" returns successfully" Mar 11 02:05:15.553564 kubelet[2520]: I0311 02:05:15.553440 2520 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 11 02:05:15.645630 containerd[1464]: time="2026-03-11T02:05:15.645357575Z" level=info msg="shim disconnected" id=3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec namespace=k8s.io Mar 11 02:05:15.645630 containerd[1464]: time="2026-03-11T02:05:15.645500461Z" level=warning msg="cleaning up after shim disconnected" id=3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec namespace=k8s.io Mar 11 02:05:15.645630 containerd[1464]: time="2026-03-11T02:05:15.645518615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:05:15.650389 systemd[1]: Created slice kubepods-burstable-pod9e2daf99_f9ea_4749_a5da_e2c4a54796cb.slice - libcontainer container kubepods-burstable-pod9e2daf99_f9ea_4749_a5da_e2c4a54796cb.slice. 
Mar 11 02:05:15.662433 systemd[1]: Created slice kubepods-burstable-pod0daead55_086a_4b3f_a937_3314966d43d0.slice - libcontainer container kubepods-burstable-pod0daead55_086a_4b3f_a937_3314966d43d0.slice. Mar 11 02:05:15.727453 kubelet[2520]: E0311 02:05:15.727403 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:15.735057 containerd[1464]: time="2026-03-11T02:05:15.733004493Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 11 02:05:15.751532 containerd[1464]: time="2026-03-11T02:05:15.751458212Z" level=info msg="CreateContainer within sandbox \"fdef50dba91764b67882f312c07eda2f86924f8d1a0ae9a52d13f830c646e304\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"1f4bbc908b5847f96fd440da0f9bcb07e8b1d7bfcc72e1270b05bb14ac1f4766\"" Mar 11 02:05:15.752041 containerd[1464]: time="2026-03-11T02:05:15.752001694Z" level=info msg="StartContainer for \"1f4bbc908b5847f96fd440da0f9bcb07e8b1d7bfcc72e1270b05bb14ac1f4766\"" Mar 11 02:05:15.771936 kubelet[2520]: I0311 02:05:15.771849 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e2daf99-f9ea-4749-a5da-e2c4a54796cb-config-volume\") pod \"coredns-66bc5c9577-pfhb6\" (UID: \"9e2daf99-f9ea-4749-a5da-e2c4a54796cb\") " pod="kube-system/coredns-66bc5c9577-pfhb6" Mar 11 02:05:15.771936 kubelet[2520]: I0311 02:05:15.771898 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0daead55-086a-4b3f-a937-3314966d43d0-config-volume\") pod \"coredns-66bc5c9577-kwv6t\" (UID: \"0daead55-086a-4b3f-a937-3314966d43d0\") " 
pod="kube-system/coredns-66bc5c9577-kwv6t" Mar 11 02:05:15.771936 kubelet[2520]: I0311 02:05:15.771919 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d822w\" (UniqueName: \"kubernetes.io/projected/0daead55-086a-4b3f-a937-3314966d43d0-kube-api-access-d822w\") pod \"coredns-66bc5c9577-kwv6t\" (UID: \"0daead55-086a-4b3f-a937-3314966d43d0\") " pod="kube-system/coredns-66bc5c9577-kwv6t" Mar 11 02:05:15.771936 kubelet[2520]: I0311 02:05:15.771940 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtl7\" (UniqueName: \"kubernetes.io/projected/9e2daf99-f9ea-4749-a5da-e2c4a54796cb-kube-api-access-gdtl7\") pod \"coredns-66bc5c9577-pfhb6\" (UID: \"9e2daf99-f9ea-4749-a5da-e2c4a54796cb\") " pod="kube-system/coredns-66bc5c9577-pfhb6" Mar 11 02:05:15.791523 systemd[1]: Started cri-containerd-1f4bbc908b5847f96fd440da0f9bcb07e8b1d7bfcc72e1270b05bb14ac1f4766.scope - libcontainer container 1f4bbc908b5847f96fd440da0f9bcb07e8b1d7bfcc72e1270b05bb14ac1f4766. 
Mar 11 02:05:15.825668 containerd[1464]: time="2026-03-11T02:05:15.825406142Z" level=info msg="StartContainer for \"1f4bbc908b5847f96fd440da0f9bcb07e8b1d7bfcc72e1270b05bb14ac1f4766\" returns successfully" Mar 11 02:05:15.961643 kubelet[2520]: E0311 02:05:15.961427 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:15.962277 containerd[1464]: time="2026-03-11T02:05:15.962122028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pfhb6,Uid:9e2daf99-f9ea-4749-a5da-e2c4a54796cb,Namespace:kube-system,Attempt:0,}" Mar 11 02:05:15.970817 kubelet[2520]: E0311 02:05:15.970745 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:15.971490 containerd[1464]: time="2026-03-11T02:05:15.971435201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwv6t,Uid:0daead55-086a-4b3f-a937-3314966d43d0,Namespace:kube-system,Attempt:0,}" Mar 11 02:05:16.021402 containerd[1464]: time="2026-03-11T02:05:16.021254357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pfhb6,Uid:9e2daf99-f9ea-4749-a5da-e2c4a54796cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2e1f8492d5940b034cd4beb909e9690cc8023bbc901b5ad94ef679d4d6ab0f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 11 02:05:16.023872 containerd[1464]: time="2026-03-11T02:05:16.023793104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwv6t,Uid:0daead55-086a-4b3f-a937-3314966d43d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ee1f26e92669e2b19b6292bb05cb42aa47f96c8336cfb994606bbbf47d8294f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 11 02:05:16.023962 kubelet[2520]: E0311 02:05:16.023812 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e1f8492d5940b034cd4beb909e9690cc8023bbc901b5ad94ef679d4d6ab0f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 11 02:05:16.023962 kubelet[2520]: E0311 02:05:16.023912 2520 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e1f8492d5940b034cd4beb909e9690cc8023bbc901b5ad94ef679d4d6ab0f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-pfhb6" Mar 11 02:05:16.023962 kubelet[2520]: E0311 02:05:16.023944 2520 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e1f8492d5940b034cd4beb909e9690cc8023bbc901b5ad94ef679d4d6ab0f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-pfhb6" Mar 11 02:05:16.024090 kubelet[2520]: E0311 02:05:16.024025 2520 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pfhb6_kube-system(9e2daf99-f9ea-4749-a5da-e2c4a54796cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pfhb6_kube-system(9e2daf99-f9ea-4749-a5da-e2c4a54796cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2e1f8492d5940b034cd4beb909e9690cc8023bbc901b5ad94ef679d4d6ab0f5\\\": plugin 
type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-pfhb6" podUID="9e2daf99-f9ea-4749-a5da-e2c4a54796cb" Mar 11 02:05:16.024175 kubelet[2520]: E0311 02:05:16.024103 2520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1f26e92669e2b19b6292bb05cb42aa47f96c8336cfb994606bbbf47d8294f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 11 02:05:16.024243 kubelet[2520]: E0311 02:05:16.024188 2520 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1f26e92669e2b19b6292bb05cb42aa47f96c8336cfb994606bbbf47d8294f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-kwv6t" Mar 11 02:05:16.024243 kubelet[2520]: E0311 02:05:16.024223 2520 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1f26e92669e2b19b6292bb05cb42aa47f96c8336cfb994606bbbf47d8294f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-kwv6t" Mar 11 02:05:16.024399 kubelet[2520]: E0311 02:05:16.024274 2520 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kwv6t_kube-system(0daead55-086a-4b3f-a937-3314966d43d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kwv6t_kube-system(0daead55-086a-4b3f-a937-3314966d43d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ee1f26e92669e2b19b6292bb05cb42aa47f96c8336cfb994606bbbf47d8294f4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-kwv6t" podUID="0daead55-086a-4b3f-a937-3314966d43d0" Mar 11 02:05:16.402006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3874f510484e0c1441da9c08ec4b8f0be05066175ba089c17282d3463c4a8dec-rootfs.mount: Deactivated successfully. Mar 11 02:05:16.733108 kubelet[2520]: E0311 02:05:16.732907 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:16.896863 systemd-networkd[1397]: flannel.1: Link UP Mar 11 02:05:16.896879 systemd-networkd[1397]: flannel.1: Gained carrier Mar 11 02:05:17.736210 kubelet[2520]: E0311 02:05:17.736087 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:17.998585 systemd-networkd[1397]: flannel.1: Gained IPv6LL Mar 11 02:05:28.612611 kubelet[2520]: E0311 02:05:28.612543 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:28.614952 containerd[1464]: time="2026-03-11T02:05:28.613905447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwv6t,Uid:0daead55-086a-4b3f-a937-3314966d43d0,Namespace:kube-system,Attempt:0,}" Mar 11 02:05:28.640189 systemd-networkd[1397]: cni0: Link UP Mar 11 02:05:28.640203 systemd-networkd[1397]: cni0: Gained carrier Mar 11 02:05:28.645179 systemd-networkd[1397]: cni0: Lost carrier Mar 11 02:05:28.652432 systemd-networkd[1397]: veth0d61cebc: Link UP Mar 11 02:05:28.658741 kernel: cni0: port 1(veth0d61cebc) entered blocking state Mar 11 
02:05:28.658810 kernel: cni0: port 1(veth0d61cebc) entered disabled state Mar 11 02:05:28.658833 kernel: veth0d61cebc: entered allmulticast mode Mar 11 02:05:28.660801 kernel: veth0d61cebc: entered promiscuous mode Mar 11 02:05:28.664937 kernel: cni0: port 1(veth0d61cebc) entered blocking state Mar 11 02:05:28.664985 kernel: cni0: port 1(veth0d61cebc) entered forwarding state Mar 11 02:05:28.667392 kernel: cni0: port 1(veth0d61cebc) entered disabled state Mar 11 02:05:28.680454 kernel: cni0: port 1(veth0d61cebc) entered blocking state Mar 11 02:05:28.680556 kernel: cni0: port 1(veth0d61cebc) entered forwarding state Mar 11 02:05:28.680660 systemd-networkd[1397]: veth0d61cebc: Gained carrier Mar 11 02:05:28.681000 systemd-networkd[1397]: cni0: Gained carrier Mar 11 02:05:28.686941 containerd[1464]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Mar 11 02:05:28.686941 containerd[1464]: delegateAdd: netconf sent to delegate plugin: Mar 11 02:05:28.721827 containerd[1464]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-11T02:05:28.721694938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:05:28.721971 containerd[1464]: time="2026-03-11T02:05:28.721791449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:05:28.721971 containerd[1464]: time="2026-03-11T02:05:28.721819631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:28.722054 containerd[1464]: time="2026-03-11T02:05:28.721976333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:28.758598 systemd[1]: Started cri-containerd-bbd9aebc73e51619c00b42a67c9d3455885d47bc96a009177e463a6f6af16024.scope - libcontainer container bbd9aebc73e51619c00b42a67c9d3455885d47bc96a009177e463a6f6af16024. Mar 11 02:05:28.775572 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:05:28.804697 containerd[1464]: time="2026-03-11T02:05:28.804617923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwv6t,Uid:0daead55-086a-4b3f-a937-3314966d43d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd9aebc73e51619c00b42a67c9d3455885d47bc96a009177e463a6f6af16024\"" Mar 11 02:05:28.805474 kubelet[2520]: E0311 02:05:28.805412 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:28.811174 containerd[1464]: time="2026-03-11T02:05:28.811102547Z" level=info msg="CreateContainer within sandbox \"bbd9aebc73e51619c00b42a67c9d3455885d47bc96a009177e463a6f6af16024\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:05:28.825827 containerd[1464]: time="2026-03-11T02:05:28.825749566Z" level=info msg="CreateContainer within sandbox 
\"bbd9aebc73e51619c00b42a67c9d3455885d47bc96a009177e463a6f6af16024\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa1b020d4df697898b6b00f7cee22c67fee6c32072e8340c3d2009bdc2a7b2d0\"" Mar 11 02:05:28.826487 containerd[1464]: time="2026-03-11T02:05:28.826445091Z" level=info msg="StartContainer for \"aa1b020d4df697898b6b00f7cee22c67fee6c32072e8340c3d2009bdc2a7b2d0\"" Mar 11 02:05:28.868571 systemd[1]: Started cri-containerd-aa1b020d4df697898b6b00f7cee22c67fee6c32072e8340c3d2009bdc2a7b2d0.scope - libcontainer container aa1b020d4df697898b6b00f7cee22c67fee6c32072e8340c3d2009bdc2a7b2d0. Mar 11 02:05:28.910433 containerd[1464]: time="2026-03-11T02:05:28.910397478Z" level=info msg="StartContainer for \"aa1b020d4df697898b6b00f7cee22c67fee6c32072e8340c3d2009bdc2a7b2d0\" returns successfully" Mar 11 02:05:29.613876 kubelet[2520]: E0311 02:05:29.613786 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:29.614894 containerd[1464]: time="2026-03-11T02:05:29.614790703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pfhb6,Uid:9e2daf99-f9ea-4749-a5da-e2c4a54796cb,Namespace:kube-system,Attempt:0,}" Mar 11 02:05:29.657367 systemd-networkd[1397]: vethab284289: Link UP Mar 11 02:05:29.662537 kernel: cni0: port 2(vethab284289) entered blocking state Mar 11 02:05:29.662597 kernel: cni0: port 2(vethab284289) entered disabled state Mar 11 02:05:29.662630 kernel: vethab284289: entered allmulticast mode Mar 11 02:05:29.666657 kernel: vethab284289: entered promiscuous mode Mar 11 02:05:29.669206 kernel: cni0: port 2(vethab284289) entered blocking state Mar 11 02:05:29.669252 kernel: cni0: port 2(vethab284289) entered forwarding state Mar 11 02:05:29.672375 kernel: cni0: port 2(vethab284289) entered disabled state Mar 11 02:05:29.706420 kernel: cni0: port 2(vethab284289) entered blocking state Mar 11 
02:05:29.706511 kernel: cni0: port 2(vethab284289) entered forwarding state Mar 11 02:05:29.707243 systemd-networkd[1397]: vethab284289: Gained carrier Mar 11 02:05:29.710004 containerd[1464]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008a950), "name":"cbr0", "type":"bridge"} Mar 11 02:05:29.710004 containerd[1464]: delegateAdd: netconf sent to delegate plugin: Mar 11 02:05:29.748411 containerd[1464]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-11T02:05:29.748064654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:05:29.748411 containerd[1464]: time="2026-03-11T02:05:29.748154672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:05:29.748411 containerd[1464]: time="2026-03-11T02:05:29.748166515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:29.748625 containerd[1464]: time="2026-03-11T02:05:29.748389129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:05:29.768711 kubelet[2520]: E0311 02:05:29.768632 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:29.781612 systemd[1]: Started cri-containerd-50ec9be9f9b2cd2f69e79ddaa1b1422da6acfb51c8260810e9f5eaddc481aac3.scope - libcontainer container 50ec9be9f9b2cd2f69e79ddaa1b1422da6acfb51c8260810e9f5eaddc481aac3. Mar 11 02:05:29.789240 kubelet[2520]: I0311 02:05:29.789140 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9px68" podStartSLOduration=15.07689611 podStartE2EDuration="18.789109354s" podCreationTimestamp="2026-03-11 02:05:11 +0000 UTC" firstStartedPulling="2026-03-11 02:05:11.667851816 +0000 UTC m=+7.262024600" lastFinishedPulling="2026-03-11 02:05:15.38006507 +0000 UTC m=+10.974237844" observedRunningTime="2026-03-11 02:05:16.746452331 +0000 UTC m=+12.340625115" watchObservedRunningTime="2026-03-11 02:05:29.789109354 +0000 UTC m=+25.383282128" Mar 11 02:05:29.805740 kubelet[2520]: I0311 02:05:29.805621 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kwv6t" podStartSLOduration=18.805604008 podStartE2EDuration="18.805604008s" podCreationTimestamp="2026-03-11 02:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:05:29.789828715 +0000 UTC m=+25.384001489" watchObservedRunningTime="2026-03-11 02:05:29.805604008 +0000 UTC m=+25.399776782" Mar 11 02:05:29.806403 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:05:29.851840 containerd[1464]: time="2026-03-11T02:05:29.851748056Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-pfhb6,Uid:9e2daf99-f9ea-4749-a5da-e2c4a54796cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"50ec9be9f9b2cd2f69e79ddaa1b1422da6acfb51c8260810e9f5eaddc481aac3\"" Mar 11 02:05:29.854927 kubelet[2520]: E0311 02:05:29.854853 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:29.874239 containerd[1464]: time="2026-03-11T02:05:29.874122581Z" level=info msg="CreateContainer within sandbox \"50ec9be9f9b2cd2f69e79ddaa1b1422da6acfb51c8260810e9f5eaddc481aac3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:05:29.895973 containerd[1464]: time="2026-03-11T02:05:29.895833164Z" level=info msg="CreateContainer within sandbox \"50ec9be9f9b2cd2f69e79ddaa1b1422da6acfb51c8260810e9f5eaddc481aac3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daabcb188e1e545c4af0c80263a6dc3ad7f2d4378d017551370cdf1ba5590855\"" Mar 11 02:05:29.896668 containerd[1464]: time="2026-03-11T02:05:29.896617131Z" level=info msg="StartContainer for \"daabcb188e1e545c4af0c80263a6dc3ad7f2d4378d017551370cdf1ba5590855\"" Mar 11 02:05:29.936527 systemd[1]: Started cri-containerd-daabcb188e1e545c4af0c80263a6dc3ad7f2d4378d017551370cdf1ba5590855.scope - libcontainer container daabcb188e1e545c4af0c80263a6dc3ad7f2d4378d017551370cdf1ba5590855. 
Mar 11 02:05:29.985933 containerd[1464]: time="2026-03-11T02:05:29.984736446Z" level=info msg="StartContainer for \"daabcb188e1e545c4af0c80263a6dc3ad7f2d4378d017551370cdf1ba5590855\" returns successfully" Mar 11 02:05:30.478559 systemd-networkd[1397]: veth0d61cebc: Gained IPv6LL Mar 11 02:05:30.479042 systemd-networkd[1397]: cni0: Gained IPv6LL Mar 11 02:05:30.771742 kubelet[2520]: E0311 02:05:30.771533 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:30.771742 kubelet[2520]: E0311 02:05:30.771660 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:30.785672 kubelet[2520]: I0311 02:05:30.785537 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pfhb6" podStartSLOduration=19.78552522 podStartE2EDuration="19.78552522s" podCreationTimestamp="2026-03-11 02:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:05:30.78516576 +0000 UTC m=+26.379338544" watchObservedRunningTime="2026-03-11 02:05:30.78552522 +0000 UTC m=+26.379697994" Mar 11 02:05:31.502663 systemd-networkd[1397]: vethab284289: Gained IPv6LL Mar 11 02:05:31.774689 kubelet[2520]: E0311 02:05:31.774550 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:31.774689 kubelet[2520]: E0311 02:05:31.774675 2520 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:32.780380 kubelet[2520]: E0311 02:05:32.778542 2520 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:05:34.556484 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:43092.service - OpenSSH per-connection server daemon (10.0.0.1:43092). Mar 11 02:05:34.614466 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:34.616120 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:34.622993 systemd-logind[1450]: New session 8 of user core. Mar 11 02:05:34.638518 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 11 02:05:34.771809 sshd[3474]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:34.775795 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:43092.service: Deactivated successfully. Mar 11 02:05:34.777838 systemd[1]: session-8.scope: Deactivated successfully. Mar 11 02:05:34.778808 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Mar 11 02:05:34.780340 systemd-logind[1450]: Removed session 8. Mar 11 02:05:39.786548 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:43098.service - OpenSSH per-connection server daemon (10.0.0.1:43098). Mar 11 02:05:39.850534 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 43098 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:39.853537 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:39.861574 systemd-logind[1450]: New session 9 of user core. Mar 11 02:05:39.876728 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 11 02:05:40.031472 sshd[3510]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:40.036873 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:43098.service: Deactivated successfully. 
Mar 11 02:05:40.039130 systemd[1]: session-9.scope: Deactivated successfully. Mar 11 02:05:40.040556 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Mar 11 02:05:40.042105 systemd-logind[1450]: Removed session 9. Mar 11 02:05:45.045948 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:51104.service - OpenSSH per-connection server daemon (10.0.0.1:51104). Mar 11 02:05:45.087155 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 51104 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:45.089158 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:45.095381 systemd-logind[1450]: New session 10 of user core. Mar 11 02:05:45.105106 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 11 02:05:45.241805 sshd[3548]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:45.262399 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:51104.service: Deactivated successfully. Mar 11 02:05:45.264746 systemd[1]: session-10.scope: Deactivated successfully. Mar 11 02:05:45.266709 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Mar 11 02:05:45.272698 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:51108.service - OpenSSH per-connection server daemon (10.0.0.1:51108). Mar 11 02:05:45.274047 systemd-logind[1450]: Removed session 10. Mar 11 02:05:45.310988 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 51108 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:45.313876 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:45.321357 systemd-logind[1450]: New session 11 of user core. Mar 11 02:05:45.328723 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 11 02:05:45.506114 sshd[3564]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:45.514233 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:51108.service: Deactivated successfully. Mar 11 02:05:45.516655 systemd[1]: session-11.scope: Deactivated successfully. Mar 11 02:05:45.519131 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Mar 11 02:05:45.527823 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:51116.service - OpenSSH per-connection server daemon (10.0.0.1:51116). Mar 11 02:05:45.529236 systemd-logind[1450]: Removed session 11. Mar 11 02:05:45.592089 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 51116 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:45.593784 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:45.599996 systemd-logind[1450]: New session 12 of user core. Mar 11 02:05:45.609524 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 11 02:05:45.737839 sshd[3576]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:45.742762 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:51116.service: Deactivated successfully. Mar 11 02:05:45.745073 systemd[1]: session-12.scope: Deactivated successfully. Mar 11 02:05:45.746064 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Mar 11 02:05:45.747643 systemd-logind[1450]: Removed session 12. Mar 11 02:05:50.750591 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:46376.service - OpenSSH per-connection server daemon (10.0.0.1:46376). Mar 11 02:05:50.799171 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 46376 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:50.801438 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:50.807767 systemd-logind[1450]: New session 13 of user core. 
Mar 11 02:05:50.815760 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 11 02:05:50.945904 sshd[3610]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:50.963786 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:46376.service: Deactivated successfully. Mar 11 02:05:50.966223 systemd[1]: session-13.scope: Deactivated successfully. Mar 11 02:05:50.968878 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Mar 11 02:05:50.980873 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:46392.service - OpenSSH per-connection server daemon (10.0.0.1:46392). Mar 11 02:05:50.982710 systemd-logind[1450]: Removed session 13. Mar 11 02:05:51.014519 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:51.016984 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:51.023379 systemd-logind[1450]: New session 14 of user core. Mar 11 02:05:51.033529 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 11 02:05:51.253635 sshd[3625]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:51.267143 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:46392.service: Deactivated successfully. Mar 11 02:05:51.269250 systemd[1]: session-14.scope: Deactivated successfully. Mar 11 02:05:51.271448 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Mar 11 02:05:51.273251 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394). Mar 11 02:05:51.274563 systemd-logind[1450]: Removed session 14. 
Mar 11 02:05:51.320768 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:51.322998 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:51.329481 systemd-logind[1450]: New session 15 of user core. Mar 11 02:05:51.339547 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 11 02:05:51.950417 sshd[3637]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:51.963810 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:46394.service: Deactivated successfully. Mar 11 02:05:51.967039 systemd[1]: session-15.scope: Deactivated successfully. Mar 11 02:05:51.970376 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Mar 11 02:05:51.977701 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:46410.service - OpenSSH per-connection server daemon (10.0.0.1:46410). Mar 11 02:05:51.981537 systemd-logind[1450]: Removed session 15. Mar 11 02:05:52.041437 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 46410 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:52.043212 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:52.049144 systemd-logind[1450]: New session 16 of user core. Mar 11 02:05:52.058628 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 11 02:05:52.322669 sshd[3655]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:52.330545 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:46410.service: Deactivated successfully. Mar 11 02:05:52.333817 systemd[1]: session-16.scope: Deactivated successfully. Mar 11 02:05:52.335387 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Mar 11 02:05:52.345794 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:46412.service - OpenSSH per-connection server daemon (10.0.0.1:46412). 
Mar 11 02:05:52.347623 systemd-logind[1450]: Removed session 16. Mar 11 02:05:52.388023 sshd[3687]: Accepted publickey for core from 10.0.0.1 port 46412 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:52.389812 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:52.396072 systemd-logind[1450]: New session 17 of user core. Mar 11 02:05:52.401581 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 11 02:05:52.529476 sshd[3687]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:52.533889 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:46412.service: Deactivated successfully. Mar 11 02:05:52.536390 systemd[1]: session-17.scope: Deactivated successfully. Mar 11 02:05:52.537442 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Mar 11 02:05:52.539220 systemd-logind[1450]: Removed session 17. Mar 11 02:05:57.542261 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:46416.service - OpenSSH per-connection server daemon (10.0.0.1:46416). Mar 11 02:05:57.582819 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 46416 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:05:57.584712 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:05:57.590891 systemd-logind[1450]: New session 18 of user core. Mar 11 02:05:57.600596 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 11 02:05:57.718816 sshd[3725]: pam_unix(sshd:session): session closed for user core Mar 11 02:05:57.722843 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:46416.service: Deactivated successfully. Mar 11 02:05:57.724805 systemd[1]: session-18.scope: Deactivated successfully. Mar 11 02:05:57.725708 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Mar 11 02:05:57.726958 systemd-logind[1450]: Removed session 18. 
Mar 11 02:06:02.735163 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:47266.service - OpenSSH per-connection server daemon (10.0.0.1:47266). Mar 11 02:06:02.797866 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 47266 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:06:02.799813 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:06:02.806201 systemd-logind[1450]: New session 19 of user core. Mar 11 02:06:02.818557 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 11 02:06:02.942023 sshd[3759]: pam_unix(sshd:session): session closed for user core Mar 11 02:06:02.946895 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:47266.service: Deactivated successfully. Mar 11 02:06:02.949281 systemd[1]: session-19.scope: Deactivated successfully. Mar 11 02:06:02.950257 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Mar 11 02:06:02.952097 systemd-logind[1450]: Removed session 19. Mar 11 02:06:07.957440 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:47278.service - OpenSSH per-connection server daemon (10.0.0.1:47278). Mar 11 02:06:07.999109 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 47278 ssh2: RSA SHA256:gH/5l4Mgi/Uj9RrwCdf8v/VwgItSVtkCkGvBRY4tjmE Mar 11 02:06:08.001005 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:06:08.008130 systemd-logind[1450]: New session 20 of user core. Mar 11 02:06:08.017683 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 11 02:06:08.149151 sshd[3795]: pam_unix(sshd:session): session closed for user core Mar 11 02:06:08.155224 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:47278.service: Deactivated successfully. Mar 11 02:06:08.158645 systemd[1]: session-20.scope: Deactivated successfully. Mar 11 02:06:08.160684 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. 
Mar 11 02:06:08.163556 systemd-logind[1450]: Removed session 20.