Mar 2 13:02:10.229012 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026 Mar 2 13:02:10.229032 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:02:10.229042 kernel: BIOS-provided physical RAM map: Mar 2 13:02:10.229048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 2 13:02:10.229053 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 2 13:02:10.229059 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 2 13:02:10.229065 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 2 13:02:10.229070 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 2 13:02:10.229075 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 2 13:02:10.229081 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 2 13:02:10.229088 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 2 13:02:10.229094 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 2 13:02:10.229099 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 2 13:02:10.229105 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 2 13:02:10.229111 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 2 13:02:10.229117 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 2 13:02:10.229125 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 2 13:02:10.229131 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 2 13:02:10.229137 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 2 13:02:10.229142 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 2 13:02:10.229148 kernel: NX (Execute Disable) protection: active Mar 2 13:02:10.229154 kernel: APIC: Static calls initialized Mar 2 13:02:10.229160 kernel: efi: EFI v2.7 by EDK II Mar 2 13:02:10.229166 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 2 13:02:10.229172 kernel: SMBIOS 2.8 present. Mar 2 13:02:10.229178 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 2 13:02:10.229183 kernel: Hypervisor detected: KVM Mar 2 13:02:10.229192 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 2 13:02:10.229198 kernel: kvm-clock: using sched offset of 7488204398 cycles Mar 2 13:02:10.229204 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 2 13:02:10.229210 kernel: tsc: Detected 2445.424 MHz processor Mar 2 13:02:10.229240 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 2 13:02:10.229248 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 2 13:02:10.229254 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 2 13:02:10.229260 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 2 13:02:10.229266 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 2 13:02:10.229275 kernel: Using GB pages for direct mapping Mar 2 13:02:10.229281 kernel: Secure boot disabled Mar 2 13:02:10.229287 kernel: ACPI: Early table checksum verification disabled Mar 2 13:02:10.229312 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 2 13:02:10.229323 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 2 13:02:10.229329 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229355 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229382 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 2 13:02:10.229388 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229412 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229418 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229425 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 2 13:02:10.229460 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 2 13:02:10.229467 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 2 13:02:10.229492 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 2 13:02:10.229498 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 2 13:02:10.229505 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 2 13:02:10.229511 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 2 13:02:10.233552 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 2 13:02:10.233656 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 2 13:02:10.233679 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 2 13:02:10.233686 kernel: No NUMA configuration found Mar 2 13:02:10.233708 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 2 13:02:10.233747 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 2 13:02:10.233754 kernel: Zone ranges: Mar 2 13:02:10.233776 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 2 13:02:10.233797 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 2 13:02:10.233803 kernel: Normal empty Mar 2 13:02:10.233824 
kernel: Movable zone start for each node Mar 2 13:02:10.233856 kernel: Early memory node ranges Mar 2 13:02:10.233862 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 2 13:02:10.233884 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 2 13:02:10.233891 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 2 13:02:10.233916 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 2 13:02:10.233937 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 2 13:02:10.233943 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 2 13:02:10.233965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 2 13:02:10.233971 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 13:02:10.233992 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 2 13:02:10.233999 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 2 13:02:10.234021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 2 13:02:10.234027 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 2 13:02:10.234033 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 2 13:02:10.234057 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 2 13:02:10.234064 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 2 13:02:10.234070 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 2 13:02:10.234091 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 2 13:02:10.234098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 2 13:02:10.234118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 2 13:02:10.234125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 2 13:02:10.234145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 2 13:02:10.234152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 2 13:02:10.234176 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 2 13:02:10.234183 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 2 13:02:10.234204 kernel: TSC deadline timer available Mar 2 13:02:10.234211 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 2 13:02:10.234217 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 2 13:02:10.234223 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 2 13:02:10.234229 kernel: kvm-guest: setup PV sched yield Mar 2 13:02:10.234250 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 2 13:02:10.234256 kernel: Booting paravirtualized kernel on KVM Mar 2 13:02:10.234265 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 2 13:02:10.234272 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 2 13:02:10.234278 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 2 13:02:10.234284 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 2 13:02:10.234290 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 2 13:02:10.234297 kernel: kvm-guest: PV spinlocks enabled Mar 2 13:02:10.234303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 2 13:02:10.234310 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:02:10.234319 kernel: random: crng init done Mar 2 13:02:10.234325 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 13:02:10.234332 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 13:02:10.234338 kernel: Fallback order for Node 0: 0 Mar 2 13:02:10.234344 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 2 13:02:10.234350 kernel: Policy zone: DMA32 Mar 2 13:02:10.234356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 13:02:10.234363 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 2 13:02:10.234369 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 2 13:02:10.234378 kernel: ftrace: allocating 37996 entries in 149 pages Mar 2 13:02:10.234384 kernel: ftrace: allocated 149 pages with 4 groups Mar 2 13:02:10.234390 kernel: Dynamic Preempt: voluntary Mar 2 13:02:10.234397 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 13:02:10.234411 kernel: rcu: RCU event tracing is enabled. Mar 2 13:02:10.234420 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 2 13:02:10.234427 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 13:02:10.234433 kernel: Rude variant of Tasks RCU enabled. Mar 2 13:02:10.234440 kernel: Tracing variant of Tasks RCU enabled. Mar 2 13:02:10.234446 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 13:02:10.234470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 2 13:02:10.234477 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 2 13:02:10.234486 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 2 13:02:10.234493 kernel: Console: colour dummy device 80x25 Mar 2 13:02:10.234499 kernel: printk: console [ttyS0] enabled Mar 2 13:02:10.234506 kernel: ACPI: Core revision 20230628 Mar 2 13:02:10.234512 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 2 13:02:10.234521 kernel: APIC: Switch to symmetric I/O mode setup Mar 2 13:02:10.234528 kernel: x2apic enabled Mar 2 13:02:10.234534 kernel: APIC: Switched APIC routing to: physical x2apic Mar 2 13:02:10.234541 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 2 13:02:10.234548 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 2 13:02:10.234554 kernel: kvm-guest: setup PV IPIs Mar 2 13:02:10.234599 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 2 13:02:10.234606 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 2 13:02:10.234613 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 2 13:02:10.234622 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 2 13:02:10.234629 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 2 13:02:10.234636 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 2 13:02:10.234642 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 2 13:02:10.234649 kernel: Spectre V2 : Mitigation: Retpolines Mar 2 13:02:10.234655 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 2 13:02:10.234662 kernel: Speculative Store Bypass: Vulnerable Mar 2 13:02:10.234668 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 2 13:02:10.234676 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 2 13:02:10.234684 kernel: active return thunk: srso_alias_return_thunk Mar 2 13:02:10.234691 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 2 13:02:10.234697 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 2 13:02:10.234704 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 2 13:02:10.234710 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 2 13:02:10.234717 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 2 13:02:10.234723 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 2 13:02:10.234730 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 2 13:02:10.234739 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 2 13:02:10.234745 kernel: Freeing SMP alternatives memory: 32K Mar 2 13:02:10.234752 kernel: pid_max: default: 32768 minimum: 301 Mar 2 13:02:10.234758 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 13:02:10.234765 kernel: landlock: Up and running. Mar 2 13:02:10.234771 kernel: SELinux: Initializing. Mar 2 13:02:10.234778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:02:10.234784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:02:10.234791 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 2 13:02:10.234800 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:02:10.234806 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:02:10.234813 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 2 13:02:10.234819 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 2 13:02:10.234849 kernel: signal: max sigframe size: 1776 Mar 2 13:02:10.234857 kernel: rcu: Hierarchical SRCU implementation. Mar 2 13:02:10.234864 kernel: rcu: Max phase no-delay instances is 400. Mar 2 13:02:10.234870 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 2 13:02:10.234877 kernel: smp: Bringing up secondary CPUs ... Mar 2 13:02:10.234886 kernel: smpboot: x86: Booting SMP configuration: Mar 2 13:02:10.234893 kernel: .... node #0, CPUs: #1 #2 #3 Mar 2 13:02:10.234899 kernel: smp: Brought up 1 node, 4 CPUs Mar 2 13:02:10.234906 kernel: smpboot: Max logical packages: 1 Mar 2 13:02:10.234912 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 2 13:02:10.234918 kernel: devtmpfs: initialized Mar 2 13:02:10.234925 kernel: x86/mm: Memory block size: 128MB Mar 2 13:02:10.234931 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 2 13:02:10.234938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 2 13:02:10.234947 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 2 13:02:10.234953 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 2 13:02:10.234960 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 2 13:02:10.234967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 13:02:10.234973 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 2 13:02:10.234980 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 13:02:10.234986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 13:02:10.234993 kernel: audit: initializing netlink subsys (disabled) Mar 2 13:02:10.234999 kernel: audit: type=2000 audit(1772456527.712:1): state=initialized audit_enabled=0 res=1 Mar 2 13:02:10.235008 kernel: thermal_sys: Registered thermal governor 
'step_wise' Mar 2 13:02:10.235014 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 2 13:02:10.235021 kernel: cpuidle: using governor menu Mar 2 13:02:10.235028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 13:02:10.235034 kernel: dca service started, version 1.12.1 Mar 2 13:02:10.235041 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 2 13:02:10.235047 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 2 13:02:10.235054 kernel: PCI: Using configuration type 1 for base access Mar 2 13:02:10.235060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 2 13:02:10.235069 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 13:02:10.235075 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 13:02:10.235082 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 13:02:10.235088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 13:02:10.235095 kernel: ACPI: Added _OSI(Module Device) Mar 2 13:02:10.235101 kernel: ACPI: Added _OSI(Processor Device) Mar 2 13:02:10.235108 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 13:02:10.235114 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 13:02:10.235121 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 2 13:02:10.235129 kernel: ACPI: Interpreter enabled Mar 2 13:02:10.235136 kernel: ACPI: PM: (supports S0 S3 S5) Mar 2 13:02:10.235142 kernel: ACPI: Using IOAPIC for interrupt routing Mar 2 13:02:10.235149 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 2 13:02:10.235155 kernel: PCI: Using E820 reservations for host bridge windows Mar 2 13:02:10.235161 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 2 13:02:10.235168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 
2 13:02:10.235354 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 2 13:02:10.235492 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 2 13:02:10.235694 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 2 13:02:10.235707 kernel: PCI host bridge to bus 0000:00 Mar 2 13:02:10.235880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 2 13:02:10.235999 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 2 13:02:10.236137 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 2 13:02:10.236253 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 2 13:02:10.236370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 2 13:02:10.236479 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 2 13:02:10.236719 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 2 13:02:10.239961 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 2 13:02:10.240178 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 2 13:02:10.240376 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 2 13:02:10.240708 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 2 13:02:10.240987 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 2 13:02:10.241116 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 2 13:02:10.241238 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 2 13:02:10.241393 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 2 13:02:10.241554 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 2 13:02:10.241767 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 2 13:02:10.241935 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 2 
13:02:10.242069 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 2 13:02:10.242191 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 2 13:02:10.242310 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 2 13:02:10.242478 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 2 13:02:10.242763 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 2 13:02:10.242940 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 2 13:02:10.243063 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 2 13:02:10.243181 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 2 13:02:10.243299 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 2 13:02:10.243476 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 2 13:02:10.243709 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 2 13:02:10.243884 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 2 13:02:10.244017 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 2 13:02:10.244136 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 2 13:02:10.244265 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 2 13:02:10.244419 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 2 13:02:10.244439 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 2 13:02:10.244448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 2 13:02:10.244455 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 2 13:02:10.244461 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 2 13:02:10.244472 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 2 13:02:10.244479 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 2 13:02:10.244485 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 2 13:02:10.244492 kernel: ACPI: PCI: Interrupt 
link LNKH configured for IRQ 11 Mar 2 13:02:10.244498 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 2 13:02:10.244505 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 2 13:02:10.244511 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 2 13:02:10.244518 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 2 13:02:10.244525 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 2 13:02:10.244534 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 2 13:02:10.244540 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 2 13:02:10.244547 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 2 13:02:10.244553 kernel: iommu: Default domain type: Translated Mar 2 13:02:10.244639 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 2 13:02:10.244646 kernel: efivars: Registered efivars operations Mar 2 13:02:10.244653 kernel: PCI: Using ACPI for IRQ routing Mar 2 13:02:10.244660 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 2 13:02:10.244666 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 2 13:02:10.244677 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 2 13:02:10.244683 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 2 13:02:10.244690 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 2 13:02:10.244821 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 2 13:02:10.244985 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 2 13:02:10.245104 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 2 13:02:10.245113 kernel: vgaarb: loaded Mar 2 13:02:10.245120 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 2 13:02:10.245127 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 2 13:02:10.245138 kernel: clocksource: Switched to clocksource kvm-clock Mar 2 13:02:10.245145 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 
13:02:10.245152 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 13:02:10.245158 kernel: pnp: PnP ACPI init Mar 2 13:02:10.245287 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 2 13:02:10.245297 kernel: pnp: PnP ACPI: found 6 devices Mar 2 13:02:10.245304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 2 13:02:10.245311 kernel: NET: Registered PF_INET protocol family Mar 2 13:02:10.245320 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 13:02:10.245327 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 13:02:10.245337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 13:02:10.245350 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 13:02:10.245362 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 13:02:10.245374 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 13:02:10.245383 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:02:10.245397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:02:10.245409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 13:02:10.245427 kernel: NET: Registered PF_XDP protocol family Mar 2 13:02:10.245624 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 2 13:02:10.245775 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 2 13:02:10.245941 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 2 13:02:10.246052 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 2 13:02:10.246162 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 2 13:02:10.246270 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff 
window] Mar 2 13:02:10.246418 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 2 13:02:10.246550 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 2 13:02:10.246621 kernel: PCI: CLS 0 bytes, default 64 Mar 2 13:02:10.246629 kernel: Initialise system trusted keyrings Mar 2 13:02:10.246635 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 13:02:10.246642 kernel: Key type asymmetric registered Mar 2 13:02:10.246649 kernel: Asymmetric key parser 'x509' registered Mar 2 13:02:10.246656 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 2 13:02:10.246662 kernel: io scheduler mq-deadline registered Mar 2 13:02:10.246673 kernel: io scheduler kyber registered Mar 2 13:02:10.246680 kernel: io scheduler bfq registered Mar 2 13:02:10.246686 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 2 13:02:10.246694 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 2 13:02:10.246700 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 2 13:02:10.246707 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 2 13:02:10.246714 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 13:02:10.246721 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 2 13:02:10.246727 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 2 13:02:10.246736 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 2 13:02:10.246743 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 2 13:02:10.246911 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 2 13:02:10.246922 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 2 13:02:10.247037 kernel: rtc_cmos 00:04: registered as rtc0 Mar 2 13:02:10.247154 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:02:09 UTC (1772456529) Mar 2 13:02:10.247269 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 2 
13:02:10.247277 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 2 13:02:10.247288 kernel: efifb: probing for efifb Mar 2 13:02:10.247295 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 2 13:02:10.247302 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 2 13:02:10.247308 kernel: efifb: scrolling: redraw Mar 2 13:02:10.247315 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 2 13:02:10.247321 kernel: Console: switching to colour frame buffer device 100x37 Mar 2 13:02:10.247328 kernel: fb0: EFI VGA frame buffer device Mar 2 13:02:10.247339 kernel: pstore: Using crash dump compression: deflate Mar 2 13:02:10.247352 kernel: pstore: Registered efi_pstore as persistent store backend Mar 2 13:02:10.247369 kernel: NET: Registered PF_INET6 protocol family Mar 2 13:02:10.247379 kernel: Segment Routing with IPv6 Mar 2 13:02:10.247391 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 13:02:10.247403 kernel: NET: Registered PF_PACKET protocol family Mar 2 13:02:10.247415 kernel: Key type dns_resolver registered Mar 2 13:02:10.247427 kernel: IPI shorthand broadcast: enabled Mar 2 13:02:10.247456 kernel: sched_clock: Marking stable (1375036269, 367280626)->(1889190536, -146873641) Mar 2 13:02:10.247465 kernel: registered taskstats version 1 Mar 2 13:02:10.247472 kernel: Loading compiled-in X.509 certificates Mar 2 13:02:10.247482 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45' Mar 2 13:02:10.247489 kernel: Key type .fscrypt registered Mar 2 13:02:10.247495 kernel: Key type fscrypt-provisioning registered Mar 2 13:02:10.247502 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 13:02:10.247509 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:02:10.247516 kernel: ima: No architecture policies found Mar 2 13:02:10.247522 kernel: clk: Disabling unused clocks Mar 2 13:02:10.247529 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 2 13:02:10.247536 kernel: Write protecting the kernel read-only data: 36864k Mar 2 13:02:10.247546 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 2 13:02:10.247554 kernel: Run /init as init process Mar 2 13:02:10.247637 kernel: with arguments: Mar 2 13:02:10.247645 kernel: /init Mar 2 13:02:10.247652 kernel: with environment: Mar 2 13:02:10.247658 kernel: HOME=/ Mar 2 13:02:10.247665 kernel: TERM=linux Mar 2 13:02:10.247674 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:02:10.247688 systemd[1]: Detected virtualization kvm. Mar 2 13:02:10.247695 systemd[1]: Detected architecture x86-64. Mar 2 13:02:10.247702 systemd[1]: Running in initrd. Mar 2 13:02:10.247709 systemd[1]: No hostname configured, using default hostname. Mar 2 13:02:10.247716 systemd[1]: Hostname set to . Mar 2 13:02:10.247723 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:02:10.247730 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:02:10.247737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:02:10.247747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:02:10.247755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 2 13:02:10.247762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:02:10.247770 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:02:10.247780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:02:10.247791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:02:10.247798 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:02:10.247808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:02:10.247815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:02:10.247822 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:02:10.247880 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:02:10.247896 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:02:10.247916 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:02:10.247929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:02:10.247941 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:02:10.247954 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:02:10.247967 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:02:10.247980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:02:10.247992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:02:10.248005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:02:10.248022 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:02:10.248035 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:02:10.248048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:02:10.248061 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:02:10.248075 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:02:10.248088 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:02:10.248101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:02:10.248113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:02:10.248126 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:02:10.248176 systemd-journald[194]: Collecting audit messages is disabled.
Mar 2 13:02:10.248204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:02:10.248218 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:02:10.248239 systemd-journald[194]: Journal started
Mar 2 13:02:10.248263 systemd-journald[194]: Runtime Journal (/run/log/journal/ecd8cc5cc27343aaa23ef1fd62001a0a) is 6.0M, max 48.3M, 42.2M free.
Mar 2 13:02:10.236720 systemd-modules-load[195]: Inserted module 'overlay'
Mar 2 13:02:10.258700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:02:10.268634 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:02:10.272410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:02:10.277965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:02:10.304633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:02:10.308475 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 2 13:02:10.311722 kernel: Bridge firewalling registered
Mar 2 13:02:10.314030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:02:10.322161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:02:10.325309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:02:10.325940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:02:10.339245 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:02:10.343177 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:02:10.349819 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:02:10.373172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:02:10.381500 dracut-cmdline[225]: dracut-dracut-053
Mar 2 13:02:10.388524 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:02:10.381949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:02:10.388868 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:02:10.420931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:02:10.460771 systemd-resolved[264]: Positive Trust Anchors:
Mar 2 13:02:10.460813 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:02:10.460897 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:02:10.463524 systemd-resolved[264]: Defaulting to hostname 'linux'.
Mar 2 13:02:10.465062 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:02:10.474999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:02:10.513181 kernel: SCSI subsystem initialized
Mar 2 13:02:10.523715 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:02:10.535697 kernel: iscsi: registered transport (tcp)
Mar 2 13:02:10.566154 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:02:10.566243 kernel: QLogic iSCSI HBA Driver
Mar 2 13:02:10.622965 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:02:10.644820 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:02:10.684656 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:02:10.684718 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:02:10.689456 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:02:10.741646 kernel: raid6: avx2x4 gen() 30991 MB/s
Mar 2 13:02:10.759645 kernel: raid6: avx2x2 gen() 28275 MB/s
Mar 2 13:02:10.780698 kernel: raid6: avx2x1 gen() 21285 MB/s
Mar 2 13:02:10.780745 kernel: raid6: using algorithm avx2x4 gen() 30991 MB/s
Mar 2 13:02:10.802058 kernel: raid6: .... xor() 4792 MB/s, rmw enabled
Mar 2 13:02:10.802116 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 13:02:10.826676 kernel: xor: automatically using best checksumming function avx
Mar 2 13:02:11.008694 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:02:11.022005 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:02:11.034739 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:02:11.051136 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 2 13:02:11.057356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:02:11.079761 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:02:11.094382 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 2 13:02:11.131263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:02:11.153060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:02:11.227655 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:02:11.239821 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:02:11.255979 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:02:11.265333 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:02:11.269641 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:02:11.269775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:02:11.280902 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:02:11.291473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:02:11.314608 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 13:02:11.320753 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 13:02:11.325697 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 13:02:11.333424 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 13:02:11.333483 kernel: GPT:9289727 != 19775487
Mar 2 13:02:11.333496 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 13:02:11.333607 kernel: GPT:9289727 != 19775487
Mar 2 13:02:11.333620 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 13:02:11.333630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:02:11.332248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:02:11.332446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:02:11.354737 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:02:11.363308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:02:11.364071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:02:11.370870 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:02:11.389284 kernel: libata version 3.00 loaded.
Mar 2 13:02:11.390279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:02:11.400295 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 13:02:11.406673 kernel: AES CTR mode by8 optimization enabled
Mar 2 13:02:11.406708 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 13:02:11.409807 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 13:02:11.417814 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 13:02:11.418137 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 13:02:11.430828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:02:11.446165 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (483)
Mar 2 13:02:11.446220 kernel: scsi host0: ahci
Mar 2 13:02:11.446537 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Mar 2 13:02:11.446556 kernel: scsi host1: ahci
Mar 2 13:02:11.447045 kernel: scsi host2: ahci
Mar 2 13:02:11.447902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 13:02:11.457233 kernel: scsi host3: ahci
Mar 2 13:02:11.460392 kernel: scsi host4: ahci
Mar 2 13:02:11.460622 kernel: scsi host5: ahci
Mar 2 13:02:11.462741 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 13:02:11.477009 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 2 13:02:11.477035 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 2 13:02:11.477052 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 2 13:02:11.477077 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 2 13:02:11.477093 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 2 13:02:11.484744 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 2 13:02:11.487700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:02:11.495951 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 13:02:11.499676 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 13:02:11.525884 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:02:11.531133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:02:11.549663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:02:11.549820 disk-uuid[569]: Primary Header is updated.
Mar 2 13:02:11.549820 disk-uuid[569]: Secondary Entries is updated.
Mar 2 13:02:11.549820 disk-uuid[569]: Secondary Header is updated.
Mar 2 13:02:11.565300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:02:11.565323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:02:11.565461 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:02:11.798600 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 13:02:11.798665 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 13:02:11.799623 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 13:02:11.802784 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 13:02:11.808661 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 13:02:11.812693 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 13:02:11.816288 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 13:02:11.816309 kernel: ata3.00: applying bridge limits
Mar 2 13:02:11.819683 kernel: ata3.00: configured for UDMA/100
Mar 2 13:02:11.823674 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 13:02:11.869630 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 13:02:11.869896 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:02:11.885879 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 13:02:12.565607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:02:12.565663 disk-uuid[571]: The operation has completed successfully.
Mar 2 13:02:12.599805 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:02:12.600154 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:02:12.646753 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:02:12.654101 sh[597]: Success
Mar 2 13:02:12.670606 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 13:02:12.734551 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:02:12.753261 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:02:12.757215 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:02:12.777685 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 13:02:12.777713 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:02:12.777725 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:02:12.780974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:02:12.784288 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:02:12.798087 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:02:12.798825 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:02:12.816819 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:02:12.821973 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:02:12.843878 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:02:12.843916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:02:12.843935 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:02:12.853992 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:02:12.865707 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:02:12.872486 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:02:12.884226 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:02:12.894906 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:02:12.960392 ignition[701]: Ignition 2.19.0
Mar 2 13:02:12.960422 ignition[701]: Stage: fetch-offline
Mar 2 13:02:12.960471 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:02:12.960484 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:02:12.960657 ignition[701]: parsed url from cmdline: ""
Mar 2 13:02:12.960664 ignition[701]: no config URL provided
Mar 2 13:02:12.960680 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:02:12.960698 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:02:12.960737 ignition[701]: op(1): [started] loading QEMU firmware config module
Mar 2 13:02:12.960742 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 13:02:12.992156 ignition[701]: op(1): [finished] loading QEMU firmware config module
Mar 2 13:02:13.004743 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:02:13.027763 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:02:13.054879 systemd-networkd[786]: lo: Link UP
Mar 2 13:02:13.054915 systemd-networkd[786]: lo: Gained carrier
Mar 2 13:02:13.057541 systemd-networkd[786]: Enumeration completed
Mar 2 13:02:13.059108 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:02:13.059115 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:02:13.060735 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:02:13.060923 systemd-networkd[786]: eth0: Link UP
Mar 2 13:02:13.060929 systemd-networkd[786]: eth0: Gained carrier
Mar 2 13:02:13.060939 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:02:13.067822 systemd[1]: Reached target network.target - Network.
Mar 2 13:02:13.126657 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:02:13.139940 ignition[701]: parsing config with SHA512: a08bf82c96fcb22aa668f0f86bc3867f4d5f110adb03a8fbba30c54d0a4e0032fceb037498762eda2dbc65214615b0b11a94c0e1edf9498db078a74cc58be361
Mar 2 13:02:13.144719 unknown[701]: fetched base config from "system"
Mar 2 13:02:13.144760 unknown[701]: fetched user config from "qemu"
Mar 2 13:02:13.145433 ignition[701]: fetch-offline: fetch-offline passed
Mar 2 13:02:13.147802 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:02:13.145533 ignition[701]: Ignition finished successfully
Mar 2 13:02:13.151898 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 13:02:13.162810 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:02:13.180446 ignition[790]: Ignition 2.19.0
Mar 2 13:02:13.180477 ignition[790]: Stage: kargs
Mar 2 13:02:13.180783 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:02:13.180795 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:02:13.181502 ignition[790]: kargs: kargs passed
Mar 2 13:02:13.181542 ignition[790]: Ignition finished successfully
Mar 2 13:02:13.197990 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:02:13.213936 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:02:13.230257 ignition[797]: Ignition 2.19.0
Mar 2 13:02:13.230296 ignition[797]: Stage: disks
Mar 2 13:02:13.230443 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:02:13.230455 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:02:13.231242 ignition[797]: disks: disks passed
Mar 2 13:02:13.231281 ignition[797]: Ignition finished successfully
Mar 2 13:02:13.241387 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:02:13.242795 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:02:13.247807 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:02:13.257946 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:02:13.264340 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:02:13.271151 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:02:13.291969 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:02:13.316513 systemd-resolved[264]: Detected conflict on linux IN A 10.0.0.65
Mar 2 13:02:13.318975 systemd-resolved[264]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Mar 2 13:02:13.326016 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 13:02:13.335092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:02:13.353788 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:02:13.528807 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 13:02:13.530241 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:02:13.531315 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:02:13.555745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:02:13.563122 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:02:13.563484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:02:13.584042 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Mar 2 13:02:13.563527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:02:13.620968 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:02:13.621001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:02:13.621013 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:02:13.621023 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:02:13.563550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:02:13.622749 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:02:13.650497 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:02:13.667988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:02:13.738924 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:02:13.750160 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:02:13.758414 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:02:13.766757 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:02:13.916339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:02:13.935751 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:02:13.941052 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:02:13.954287 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:02:13.947499 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:02:13.989122 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:02:13.997314 ignition[928]: INFO : Ignition 2.19.0
Mar 2 13:02:13.997314 ignition[928]: INFO : Stage: mount
Mar 2 13:02:13.997314 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:02:13.997314 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:02:13.997314 ignition[928]: INFO : mount: mount passed
Mar 2 13:02:13.997314 ignition[928]: INFO : Ignition finished successfully
Mar 2 13:02:13.999362 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:02:14.020947 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:02:14.029265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:02:14.058699 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 2 13:02:14.058741 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:02:14.058754 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:02:14.058765 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:02:14.065695 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:02:14.067782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:02:14.104406 ignition[959]: INFO : Ignition 2.19.0
Mar 2 13:02:14.104406 ignition[959]: INFO : Stage: files
Mar 2 13:02:14.112202 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:02:14.112202 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:02:14.120037 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:02:14.124122 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:02:14.124122 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:02:14.135980 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:02:14.140645 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:02:14.140645 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:02:14.140645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:02:14.140645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:02:14.136917 unknown[959]: wrote ssh authorized keys file for user: core
Mar 2 13:02:14.216903 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:02:14.329226 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:02:14.442132 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:02:14.442132 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:02:14.442132 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:02:14.442132 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:02:14.442132 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 2 13:02:14.370827 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 2 13:02:14.640405 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 13:02:14.993316 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:02:14.993316 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 13:02:15.009326 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:02:15.019157 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:02:15.019157 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 13:02:15.019157 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 13:02:15.019157 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:02:15.039980 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:02:15.039980 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 13:02:15.039980 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:02:15.092449 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:02:15.098146 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:02:15.098146 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:02:15.098146 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:02:15.098146 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:02:15.124337 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:02:15.124337 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:02:15.124337 ignition[959]: INFO : files: files passed
Mar 2 13:02:15.124337 ignition[959]: INFO : Ignition finished successfully
Mar 2 13:02:15.142630 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:02:15.152794 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:02:15.158151 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:02:15.171215 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:02:15.174891 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:02:15.182721 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 13:02:15.187270 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:02:15.187270 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:02:15.197091 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:02:15.202189 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:02:15.223948 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:02:15.239794 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:02:15.270123 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:02:15.270352 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 13:02:15.274071 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:02:15.281911 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:02:15.288164 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:02:15.289185 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:02:15.332803 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:02:15.357046 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:02:15.376781 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:02:15.381236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:02:15.389675 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:02:15.397359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:02:15.397685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:02:15.410379 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:02:15.417323 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:02:15.424404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:02:15.432036 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:02:15.439337 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:02:15.446998 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 2 13:02:15.454677 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:02:15.463080 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:02:15.469935 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:02:15.477618 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:02:15.483977 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:02:15.484204 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:02:15.491385 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:02:15.497130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:02:15.505481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:02:15.512455 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:02:15.519445 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:02:15.519795 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:02:15.527545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:02:15.527835 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:02:15.535805 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:02:15.541771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:02:15.544214 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:02:15.551121 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:02:15.557492 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:02:15.565324 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:02:15.565537 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 2 13:02:15.572232 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:02:15.572398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:02:15.643265 ignition[1013]: INFO : Ignition 2.19.0 Mar 2 13:02:15.643265 ignition[1013]: INFO : Stage: umount Mar 2 13:02:15.643265 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:02:15.643265 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:02:15.643265 ignition[1013]: INFO : umount: umount passed Mar 2 13:02:15.643265 ignition[1013]: INFO : Ignition finished successfully Mar 2 13:02:15.579390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:02:15.579650 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:02:15.586420 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:02:15.586753 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:02:15.612083 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 13:02:15.617388 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:02:15.617688 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:02:15.626258 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:02:15.631144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:02:15.631417 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:02:15.643451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 13:02:15.643672 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:02:15.653973 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 13:02:15.654112 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:02:15.660931 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 2 13:02:15.662026 systemd[1]: Stopped target network.target - Network. Mar 2 13:02:15.667459 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:02:15.667644 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:02:15.673994 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:02:15.674055 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:02:15.681788 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:02:15.681905 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:02:15.689247 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:02:15.689302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:02:15.693250 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:02:15.701327 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:02:15.718481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:02:15.718705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 13:02:15.718922 systemd-networkd[786]: eth0: DHCPv6 lease lost Mar 2 13:02:15.730002 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:02:15.730197 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:02:15.741900 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:02:15.742116 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:02:15.751241 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:02:15.751423 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:02:15.766766 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:02:15.992293 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 2 13:02:15.766833 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:02:15.777940 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:02:15.778028 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:02:15.802028 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:02:15.819817 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:02:15.819940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:02:15.830637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:02:15.830691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:02:15.836145 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:02:15.836198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:02:15.841834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:02:15.841922 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:02:15.848501 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:02:15.876112 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:02:15.876379 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:02:15.883815 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:02:15.883979 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:02:15.886825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:02:15.886931 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:02:15.887530 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:02:15.887739 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 2 13:02:15.889413 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:02:15.889478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:02:15.891635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:02:15.891712 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:02:15.894466 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:02:15.895318 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:02:15.895373 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:02:15.896466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:02:15.896516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:02:15.898011 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 13:02:15.898150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:02:15.925170 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:02:15.925316 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:02:15.925718 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:02:15.927065 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:02:15.944347 systemd[1]: Switching root. 
Mar 2 13:02:16.125985 systemd-journald[194]: Journal stopped Mar 2 13:02:17.576643 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 13:02:17.576749 kernel: SELinux: policy capability open_perms=1 Mar 2 13:02:17.576771 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 13:02:17.576795 kernel: SELinux: policy capability always_check_network=0 Mar 2 13:02:17.576813 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 13:02:17.576829 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 13:02:17.576846 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 13:02:17.576917 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 13:02:17.576936 kernel: audit: type=1403 audit(1772456536.196:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 13:02:17.576963 systemd[1]: Successfully loaded SELinux policy in 58.409ms. Mar 2 13:02:17.576998 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.189ms. Mar 2 13:02:17.577018 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:02:17.577044 systemd[1]: Detected virtualization kvm. Mar 2 13:02:17.577070 systemd[1]: Detected architecture x86-64. Mar 2 13:02:17.577088 systemd[1]: Detected first boot. Mar 2 13:02:17.577106 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:02:17.577123 zram_generator::config[1055]: No configuration found. Mar 2 13:02:17.577143 systemd[1]: Populated /etc with preset unit settings. Mar 2 13:02:17.577162 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 13:02:17.577188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 2 13:02:17.577211 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 13:02:17.577231 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 13:02:17.577250 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 13:02:17.577269 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 13:02:17.577287 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 13:02:17.577306 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 13:02:17.577324 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 13:02:17.577343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 13:02:17.577364 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 13:02:17.577387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:02:17.577407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:02:17.577425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 13:02:17.577443 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 13:02:17.577462 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 13:02:17.577482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:02:17.577500 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 13:02:17.577518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:02:17.577537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 2 13:02:17.577651 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 13:02:17.577677 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 13:02:17.577697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 13:02:17.577716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:02:17.577735 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:02:17.577753 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:02:17.577771 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:02:17.577795 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 13:02:17.577814 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 13:02:17.577832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:02:17.577898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:02:17.577923 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:02:17.577941 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 13:02:17.577959 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 13:02:17.577978 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 13:02:17.577997 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 13:02:17.578021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:02:17.578040 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 13:02:17.578057 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 13:02:17.578077 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 2 13:02:17.578101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 13:02:17.578119 systemd[1]: Reached target machines.target - Containers. Mar 2 13:02:17.578137 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 13:02:17.578155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:02:17.578174 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:02:17.578198 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 13:02:17.578217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:02:17.578235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:02:17.578253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:02:17.578271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 2 13:02:17.578289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:02:17.578307 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 13:02:17.578328 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 13:02:17.578351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 13:02:17.578369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 13:02:17.578388 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 13:02:17.578407 kernel: ACPI: bus type drm_connector registered Mar 2 13:02:17.578424 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 2 13:02:17.578442 kernel: loop: module loaded Mar 2 13:02:17.578459 kernel: fuse: init (API version 7.39) Mar 2 13:02:17.578484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:02:17.578504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:02:17.578529 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 13:02:17.578671 systemd-journald[1141]: Collecting audit messages is disabled. Mar 2 13:02:17.578714 systemd-journald[1141]: Journal started Mar 2 13:02:17.578747 systemd-journald[1141]: Runtime Journal (/run/log/journal/ecd8cc5cc27343aaa23ef1fd62001a0a) is 6.0M, max 48.3M, 42.2M free. Mar 2 13:02:16.947477 systemd[1]: Queued start job for default target multi-user.target. Mar 2 13:02:16.969073 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 2 13:02:16.970035 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 13:02:16.970460 systemd[1]: systemd-journald.service: Consumed 1.668s CPU time. Mar 2 13:02:17.588900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:02:17.594798 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 13:02:17.594909 systemd[1]: Stopped verity-setup.service. Mar 2 13:02:17.605839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:02:17.619706 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:02:17.624302 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 13:02:17.628147 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 13:02:17.632097 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 13:02:17.636009 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 2 13:02:17.640064 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 13:02:17.644111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 13:02:17.647887 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 13:02:17.652740 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:02:17.657674 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 13:02:17.658021 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 13:02:17.662496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:02:17.662909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:02:17.667689 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 13:02:17.668015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:02:17.672304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:02:17.672699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:02:17.677431 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 13:02:17.677795 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 13:02:17.682315 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:02:17.682686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:02:17.687247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:02:17.691812 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:02:17.697036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 13:02:17.722026 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:02:17.745008 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 2 13:02:17.750910 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 13:02:17.757116 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 13:02:17.757203 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:02:17.761918 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 13:02:17.767361 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 13:02:17.772491 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 13:02:17.776032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:02:17.778741 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 13:02:17.784313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 13:02:17.788128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 13:02:17.792013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 13:02:17.795902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:02:17.797757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:02:17.804226 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 13:02:17.838745 systemd-journald[1141]: Time spent on flushing to /var/log/journal/ecd8cc5cc27343aaa23ef1fd62001a0a is 28.398ms for 983 entries. Mar 2 13:02:17.838745 systemd-journald[1141]: System Journal (/var/log/journal/ecd8cc5cc27343aaa23ef1fd62001a0a) is 8.0M, max 195.6M, 187.6M free. 
Mar 2 13:02:17.894779 systemd-journald[1141]: Received client request to flush runtime journal. Mar 2 13:02:17.894829 kernel: loop0: detected capacity change from 0 to 142488 Mar 2 13:02:17.828932 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 13:02:17.845017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:02:17.857452 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 13:02:17.861939 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 13:02:17.870447 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 13:02:17.875407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 13:02:17.884967 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 13:02:17.896955 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 13:02:17.910155 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 2 13:02:17.928078 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 13:02:17.938784 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 13:02:17.944676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:02:17.950930 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 2 13:02:17.957483 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 13:02:17.970983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:02:17.977075 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 13:02:17.978917 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 2 13:02:17.997792 kernel: loop1: detected capacity change from 0 to 228704 Mar 2 13:02:18.011956 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Mar 2 13:02:18.012024 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Mar 2 13:02:18.036303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:02:18.081723 kernel: loop2: detected capacity change from 0 to 140768 Mar 2 13:02:18.140773 kernel: loop3: detected capacity change from 0 to 142488 Mar 2 13:02:18.167717 kernel: loop4: detected capacity change from 0 to 228704 Mar 2 13:02:18.185386 kernel: loop5: detected capacity change from 0 to 140768 Mar 2 13:02:18.201425 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 13:02:18.202239 (sd-merge)[1196]: Merged extensions into '/usr'. Mar 2 13:02:18.225284 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 13:02:18.225338 systemd[1]: Reloading... Mar 2 13:02:18.338752 zram_generator::config[1222]: No configuration found. Mar 2 13:02:18.379034 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 13:02:18.467461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:02:18.522491 systemd[1]: Reloading finished in 296 ms. Mar 2 13:02:18.570972 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 13:02:18.576678 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 13:02:18.582485 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 13:02:18.611185 systemd[1]: Starting ensure-sysext.service... 
Mar 2 13:02:18.617180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:02:18.624493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:02:18.632779 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 2 13:02:18.632802 systemd[1]: Reloading... Mar 2 13:02:18.648380 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 13:02:18.649096 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 13:02:18.650534 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 13:02:18.651091 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 2 13:02:18.651291 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 2 13:02:18.656510 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:02:18.656727 systemd-tmpfiles[1261]: Skipping /boot Mar 2 13:02:18.676092 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:02:18.676109 systemd-tmpfiles[1261]: Skipping /boot Mar 2 13:02:18.686100 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Mar 2 13:02:18.728666 zram_generator::config[1288]: No configuration found. 
Mar 2 13:02:18.830257 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1310)
Mar 2 13:02:18.870911 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 2 13:02:18.888676 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:02:18.898527 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 2 13:02:18.907048 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:02:18.907316 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:02:18.907754 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:02:18.898330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:02:18.973640 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 2 13:02:19.010215 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:02:19.019776 systemd[1]: Reloading finished in 386 ms.
Mar 2 13:02:19.047516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:02:19.052768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:02:19.105783 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:02:19.163755 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:02:19.195759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:02:19.200831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:02:19.221836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:02:19.241746 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:02:19.248646 kernel: kvm_amd: TSC scaling supported
Mar 2 13:02:19.248709 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:02:19.248730 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:02:19.255693 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:02:19.255773 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:02:19.259910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:02:19.265794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:02:19.277120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:02:19.289187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:02:19.301216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:02:19.321113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:02:19.327907 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:02:19.339719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:02:19.360383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:02:19.381099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:02:19.396772 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:02:19.408459 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:02:19.423783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:02:19.440752 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:02:19.427497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:02:19.428716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:02:19.429229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:02:19.433949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:02:19.434113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:02:19.442428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:02:19.443039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:02:19.448650 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:02:19.448979 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:02:19.453144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:02:19.461263 augenrules[1388]: No rules
Mar 2 13:02:19.465329 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:02:19.469654 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:02:19.483798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:02:19.484070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:02:19.491288 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:02:19.501524 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:02:19.503455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:02:19.527087 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:02:19.528916 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:02:19.531666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:02:19.534255 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:02:19.542316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:02:19.550824 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:02:19.645018 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:02:19.768016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:02:19.773187 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:02:19.781051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:02:19.794934 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:02:19.824433 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:02:19.850664 systemd-networkd[1380]: lo: Link UP
Mar 2 13:02:19.850675 systemd-networkd[1380]: lo: Gained carrier
Mar 2 13:02:19.852452 systemd-networkd[1380]: Enumeration completed
Mar 2 13:02:19.852650 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:02:19.855492 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:02:19.855529 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:02:19.857649 systemd-networkd[1380]: eth0: Link UP
Mar 2 13:02:19.857658 systemd-networkd[1380]: eth0: Gained carrier
Mar 2 13:02:19.857670 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:02:19.871928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:02:19.873613 systemd-resolved[1381]: Positive Trust Anchors:
Mar 2 13:02:19.873931 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:02:19.874001 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:02:19.877526 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:02:19.883406 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:02:19.889252 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:02:19.889652 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:02:19.891906 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Mar 2 13:02:19.894274 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:02:19.894374 systemd-timesyncd[1383]: Initial clock synchronization to Mon 2026-03-02 13:02:19.700550 UTC.
Mar 2 13:02:19.896376 systemd-resolved[1381]: Defaulting to hostname 'linux'.
Mar 2 13:02:19.898817 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:02:19.905004 systemd[1]: Reached target network.target - Network.
Mar 2 13:02:19.926262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:02:19.932525 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:02:19.937033 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:02:19.942102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:02:19.947990 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:02:19.952551 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:02:19.958178 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:02:19.963501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:02:19.963673 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:02:19.967756 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:02:19.973027 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:02:19.980388 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:02:19.995436 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:02:19.999669 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:02:20.004061 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:02:20.009900 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:02:20.022015 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:02:20.022075 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:02:20.023863 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:02:20.029837 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:02:20.034075 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:02:20.040735 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:02:20.044969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:02:20.046311 jq[1429]: false
Mar 2 13:02:20.047888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:02:20.056815 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:02:20.064037 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:02:20.071428 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:02:20.080948 dbus-daemon[1428]: [system] SELinux support is enabled
Mar 2 13:02:20.080760 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:02:20.086110 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found loop3
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found loop4
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found loop5
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found sr0
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda1
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda2
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda3
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found usr
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda4
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda6
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda7
Mar 2 13:02:20.088790 extend-filesystems[1430]: Found vda9
Mar 2 13:02:20.088790 extend-filesystems[1430]: Checking size of /dev/vda9
Mar 2 13:02:20.199763 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:02:20.199813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1334)
Mar 2 13:02:20.086676 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:02:20.199887 extend-filesystems[1430]: Resized partition /dev/vda9
Mar 2 13:02:20.088431 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:02:20.205486 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Mar 2 13:02:20.094945 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:02:20.213909 update_engine[1443]: I20260302 13:02:20.126216 1443 main.cc:92] Flatcar Update Engine starting
Mar 2 13:02:20.213909 update_engine[1443]: I20260302 13:02:20.127943 1443 update_check_scheduler.cc:74] Next update check in 6m17s
Mar 2 13:02:20.100656 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:02:20.214240 jq[1446]: true
Mar 2 13:02:20.133847 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:02:20.134164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:02:20.134749 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:02:20.135034 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:02:20.152002 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:02:20.217476 jq[1454]: true
Mar 2 13:02:20.152286 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:02:20.186490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:02:20.186519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:02:20.193865 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:02:20.193885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:02:20.214907 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:02:20.229653 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:02:20.230372 tar[1452]: linux-amd64/LICENSE
Mar 2 13:02:20.231154 tar[1452]: linux-amd64/helm
Mar 2 13:02:20.244786 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:02:20.282082 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 13:02:20.282128 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:02:20.283929 systemd-logind[1441]: New seat seat0.
Mar 2 13:02:20.284996 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:02:20.300659 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:02:20.329037 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:02:20.329037 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:02:20.329037 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:02:20.337973 extend-filesystems[1430]: Resized filesystem in /dev/vda9
Mar 2 13:02:20.335240 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:02:20.335513 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:02:20.354172 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:02:20.360175 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:02:20.365413 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:02:20.376126 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:02:20.418890 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:02:20.454772 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:02:20.461194 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:02:20.493235 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:02:20.493720 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:02:20.507937 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:02:20.563691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:02:20.577098 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:02:20.581864 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:02:20.585889 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:02:20.910243 containerd[1461]: time="2026-03-02T13:02:20.910017093Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:02:21.037704 containerd[1461]: time="2026-03-02T13:02:21.037328828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.041228 containerd[1461]: time="2026-03-02T13:02:21.041145980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:02:21.041228 containerd[1461]: time="2026-03-02T13:02:21.041205000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:02:21.041228 containerd[1461]: time="2026-03-02T13:02:21.041230137Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:02:21.041697 containerd[1461]: time="2026-03-02T13:02:21.041640796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:02:21.041697 containerd[1461]: time="2026-03-02T13:02:21.041688479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.041783 containerd[1461]: time="2026-03-02T13:02:21.041768153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:02:21.041814 containerd[1461]: time="2026-03-02T13:02:21.041783060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042105 containerd[1461]: time="2026-03-02T13:02:21.042038225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042105 containerd[1461]: time="2026-03-02T13:02:21.042083672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042105 containerd[1461]: time="2026-03-02T13:02:21.042098942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042198 containerd[1461]: time="2026-03-02T13:02:21.042109308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042227 containerd[1461]: time="2026-03-02T13:02:21.042202359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042642 containerd[1461]: time="2026-03-02T13:02:21.042501040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042785 containerd[1461]: time="2026-03-02T13:02:21.042741082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:02:21.042785 containerd[1461]: time="2026-03-02T13:02:21.042778428Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:02:21.042913 containerd[1461]: time="2026-03-02T13:02:21.042872372Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:02:21.043054 containerd[1461]: time="2026-03-02T13:02:21.043016293Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:02:21.050246 containerd[1461]: time="2026-03-02T13:02:21.049855846Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:02:21.050246 containerd[1461]: time="2026-03-02T13:02:21.049973307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:02:21.050246 containerd[1461]: time="2026-03-02T13:02:21.050003033Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:02:21.050246 containerd[1461]: time="2026-03-02T13:02:21.050024943Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:02:21.050246 containerd[1461]: time="2026-03-02T13:02:21.050047754Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:02:21.050412 containerd[1461]: time="2026-03-02T13:02:21.050356055Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:02:21.052295 containerd[1461]: time="2026-03-02T13:02:21.052217247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:02:21.052655 containerd[1461]: time="2026-03-02T13:02:21.052508739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:02:21.052755 containerd[1461]: time="2026-03-02T13:02:21.052697980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:02:21.052801 containerd[1461]: time="2026-03-02T13:02:21.052777723Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:02:21.052846 containerd[1461]: time="2026-03-02T13:02:21.052805262Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052846 containerd[1461]: time="2026-03-02T13:02:21.052830005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052993 containerd[1461]: time="2026-03-02T13:02:21.052851856Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052993 containerd[1461]: time="2026-03-02T13:02:21.052884642Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052993 containerd[1461]: time="2026-03-02T13:02:21.052908424Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052993 containerd[1461]: time="2026-03-02T13:02:21.052927921Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.052993 containerd[1461]: time="2026-03-02T13:02:21.052981694Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.053158 containerd[1461]: time="2026-03-02T13:02:21.053003800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:02:21.053158 containerd[1461]: time="2026-03-02T13:02:21.053079502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053158 containerd[1461]: time="2026-03-02T13:02:21.053110169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053158 containerd[1461]: time="2026-03-02T13:02:21.053131441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053158 containerd[1461]: time="2026-03-02T13:02:21.053153213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053248 containerd[1461]: time="2026-03-02T13:02:21.053173544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053248 containerd[1461]: time="2026-03-02T13:02:21.053192855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053248 containerd[1461]: time="2026-03-02T13:02:21.053212420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053248 containerd[1461]: time="2026-03-02T13:02:21.053235633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053312 containerd[1461]: time="2026-03-02T13:02:21.053256808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053312 containerd[1461]: time="2026-03-02T13:02:21.053281159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053312 containerd[1461]: time="2026-03-02T13:02:21.053300852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053358 containerd[1461]: time="2026-03-02T13:02:21.053319770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053358 containerd[1461]: time="2026-03-02T13:02:21.053336403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053400 containerd[1461]: time="2026-03-02T13:02:21.053369493Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:02:21.053419 containerd[1461]: time="2026-03-02T13:02:21.053409154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053517 containerd[1461]: time="2026-03-02T13:02:21.053428268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053696 containerd[1461]: time="2026-03-02T13:02:21.053623070Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:02:21.053726 containerd[1461]: time="2026-03-02T13:02:21.053712748Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:02:21.053756 containerd[1461]: time="2026-03-02T13:02:21.053741503Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:02:21.053790 containerd[1461]: time="2026-03-02T13:02:21.053756693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:02:21.053790 containerd[1461]: time="2026-03-02T13:02:21.053773641Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:02:21.053846 containerd[1461]: time="2026-03-02T13:02:21.053787350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.053846 containerd[1461]: time="2026-03-02T13:02:21.053807661Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:02:21.053846 containerd[1461]: time="2026-03-02T13:02:21.053823550Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:02:21.053846 containerd[1461]: time="2026-03-02T13:02:21.053838398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:02:21.054305 containerd[1461]: time="2026-03-02T13:02:21.054193293Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:02:21.054305 containerd[1461]: time="2026-03-02T13:02:21.054302997Z" level=info msg="Connect containerd service"
Mar 2 13:02:21.054305 containerd[1461]: time="2026-03-02T13:02:21.054402462Z" level=info msg="using legacy CRI server"
Mar 2 13:02:21.054305 containerd[1461]: time="2026-03-02T13:02:21.054413790Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:02:21.055250 containerd[1461]: time="2026-03-02T13:02:21.054520768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:02:21.056135 containerd[1461]: time="2026-03-02T13:02:21.056072306Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:02:21.056688 containerd[1461]: time="2026-03-02T13:02:21.056522115Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:02:21.056748 containerd[1461]: time="2026-03-02T13:02:21.056628030Z" level=info msg="Start subscribing containerd event"
Mar 2 13:02:21.057010 containerd[1461]: time="2026-03-02T13:02:21.056767588Z" level=info msg="Start recovering state"
Mar 2 13:02:21.057010 containerd[1461]: time="2026-03-02T13:02:21.056664772Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:02:21.057855 containerd[1461]: time="2026-03-02T13:02:21.056873339Z" level=info msg="Start event monitor"
Mar 2 13:02:21.057855 containerd[1461]: time="2026-03-02T13:02:21.057818131Z" level=info msg="Start snapshots syncer"
Mar 2 13:02:21.057855 containerd[1461]: time="2026-03-02T13:02:21.057831999Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:02:21.057855 containerd[1461]: time="2026-03-02T13:02:21.057844474Z" level=info msg="Start streaming server"
Mar 2 13:02:21.057991 containerd[1461]: time="2026-03-02T13:02:21.057977637Z" level=info msg="containerd successfully booted in 0.150222s"
Mar 2 13:02:21.058246 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:02:21.195011 tar[1452]: linux-amd64/README.md
Mar 2 13:02:21.219347 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:02:21.794147 systemd-networkd[1380]: eth0: Gained IPv6LL Mar 2 13:02:21.798614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 13:02:21.803518 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 13:02:21.824157 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 13:02:21.831184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:02:21.837703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 13:02:21.872023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 13:02:21.881935 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 13:02:21.882354 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 13:02:21.887632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 13:02:23.437884 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:02:23.451002 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282). Mar 2 13:02:23.534538 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:02:23.667362 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:02:23.681745 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:02:23.698105 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:02:23.706937 systemd-logind[1441]: New session 1 of user core. Mar 2 13:02:23.746503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:02:23.766152 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 2 13:02:23.778486 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:02:24.176927 systemd[1539]: Queued start job for default target default.target. Mar 2 13:02:24.192338 systemd[1539]: Created slice app.slice - User Application Slice. Mar 2 13:02:24.192371 systemd[1539]: Reached target paths.target - Paths. Mar 2 13:02:24.192385 systemd[1539]: Reached target timers.target - Timers. Mar 2 13:02:24.194414 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:02:24.259079 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:02:24.259405 systemd[1539]: Reached target sockets.target - Sockets. Mar 2 13:02:24.259425 systemd[1539]: Reached target basic.target - Basic System. Mar 2 13:02:24.259480 systemd[1539]: Reached target default.target - Main User Target. Mar 2 13:02:24.259524 systemd[1539]: Startup finished in 458ms. Mar 2 13:02:24.260875 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:02:24.315970 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:02:24.414762 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:36296.service - OpenSSH per-connection server daemon (10.0.0.1:36296). Mar 2 13:02:24.438745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:02:24.442394 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:02:24.445896 systemd[1]: Startup finished in 1.537s (kernel) + 6.394s (initrd) + 8.302s (userspace) = 16.235s. 
Mar 2 13:02:24.453246 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:02:24.476449 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 36296 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:02:24.478883 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:02:24.486151 systemd-logind[1441]: New session 2 of user core. Mar 2 13:02:24.495823 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:02:24.556266 sshd[1552]: pam_unix(sshd:session): session closed for user core Mar 2 13:02:24.567775 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:36296.service: Deactivated successfully. Mar 2 13:02:24.569702 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 13:02:24.571935 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Mar 2 13:02:24.573417 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:36310.service - OpenSSH per-connection server daemon (10.0.0.1:36310). Mar 2 13:02:24.574711 systemd-logind[1441]: Removed session 2. Mar 2 13:02:24.641053 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:02:24.642888 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:02:24.649048 systemd-logind[1441]: New session 3 of user core. Mar 2 13:02:24.663920 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:02:24.718301 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 2 13:02:24.727873 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:36310.service: Deactivated successfully. Mar 2 13:02:24.731048 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 13:02:24.733726 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. 
Mar 2 13:02:24.740904 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:36314.service - OpenSSH per-connection server daemon (10.0.0.1:36314). Mar 2 13:02:24.742318 systemd-logind[1441]: Removed session 3. Mar 2 13:02:24.782443 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 36314 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:02:24.785693 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:02:24.793733 systemd-logind[1441]: New session 4 of user core. Mar 2 13:02:24.799830 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:02:24.860282 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 2 13:02:24.867800 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:36314.service: Deactivated successfully. Mar 2 13:02:24.869792 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:02:24.871840 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:02:24.878130 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:36316.service - OpenSSH per-connection server daemon (10.0.0.1:36316). Mar 2 13:02:24.879807 systemd-logind[1441]: Removed session 4. Mar 2 13:02:24.913133 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 36316 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:02:24.915214 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:02:24.921284 systemd-logind[1441]: New session 5 of user core. Mar 2 13:02:24.930856 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 13:02:24.996700 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:02:24.997062 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:02:25.009212 kubelet[1556]: E0302 13:02:25.008850 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:02:25.013776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:02:25.014096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:02:25.014622 systemd[1]: kubelet.service: Consumed 2.835s CPU time. Mar 2 13:02:25.304980 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 13:02:25.305122 (dockerd)[1609]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:02:25.618835 dockerd[1609]: time="2026-03-02T13:02:25.618631460Z" level=info msg="Starting up" Mar 2 13:02:25.894678 dockerd[1609]: time="2026-03-02T13:02:25.894209290Z" level=info msg="Loading containers: start." Mar 2 13:02:26.086692 kernel: Initializing XFRM netlink socket Mar 2 13:02:26.261389 systemd-networkd[1380]: docker0: Link UP Mar 2 13:02:26.290885 dockerd[1609]: time="2026-03-02T13:02:26.290806250Z" level=info msg="Loading containers: done." 
Mar 2 13:02:26.346848 dockerd[1609]: time="2026-03-02T13:02:26.346778149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:02:26.347013 dockerd[1609]: time="2026-03-02T13:02:26.346926912Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:02:26.347051 dockerd[1609]: time="2026-03-02T13:02:26.347026025Z" level=info msg="Daemon has completed initialization" Mar 2 13:02:26.399961 dockerd[1609]: time="2026-03-02T13:02:26.399871982Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:02:26.400078 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 13:02:27.500132 containerd[1461]: time="2026-03-02T13:02:27.499934124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 2 13:02:28.053747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843670346.mount: Deactivated successfully. 
Mar 2 13:02:29.355136 containerd[1461]: time="2026-03-02T13:02:29.354972464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:29.356166 containerd[1461]: time="2026-03-02T13:02:29.356006938Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 2 13:02:29.357360 containerd[1461]: time="2026-03-02T13:02:29.357275170Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:29.360274 containerd[1461]: time="2026-03-02T13:02:29.360210061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:29.361365 containerd[1461]: time="2026-03-02T13:02:29.361321018Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.861203224s" Mar 2 13:02:29.361426 containerd[1461]: time="2026-03-02T13:02:29.361368789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 2 13:02:29.362343 containerd[1461]: time="2026-03-02T13:02:29.362184522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 2 13:02:30.744995 containerd[1461]: time="2026-03-02T13:02:30.744852415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:30.745985 containerd[1461]: time="2026-03-02T13:02:30.745935144Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 2 13:02:30.747423 containerd[1461]: time="2026-03-02T13:02:30.747353544Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:30.750382 containerd[1461]: time="2026-03-02T13:02:30.750314251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:30.752510 containerd[1461]: time="2026-03-02T13:02:30.752421842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.390210518s" Mar 2 13:02:30.752510 containerd[1461]: time="2026-03-02T13:02:30.752469109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 2 13:02:30.753205 containerd[1461]: time="2026-03-02T13:02:30.753111102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 2 13:02:31.983837 containerd[1461]: time="2026-03-02T13:02:31.983744915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:31.985323 containerd[1461]: time="2026-03-02T13:02:31.985229054Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 2 13:02:31.987290 containerd[1461]: time="2026-03-02T13:02:31.987209027Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:31.993228 containerd[1461]: time="2026-03-02T13:02:31.993152330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:31.994729 containerd[1461]: time="2026-03-02T13:02:31.994683764Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.241521205s" Mar 2 13:02:31.994968 containerd[1461]: time="2026-03-02T13:02:31.994902753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 2 13:02:31.996130 containerd[1461]: time="2026-03-02T13:02:31.996067556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 2 13:02:33.177108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352197786.mount: Deactivated successfully. 
Mar 2 13:02:33.589386 containerd[1461]: time="2026-03-02T13:02:33.589224807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:33.590246 containerd[1461]: time="2026-03-02T13:02:33.590178624Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 2 13:02:33.592682 containerd[1461]: time="2026-03-02T13:02:33.592531601Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:33.595481 containerd[1461]: time="2026-03-02T13:02:33.595373849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:33.596643 containerd[1461]: time="2026-03-02T13:02:33.596450729Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.600317163s" Mar 2 13:02:33.596643 containerd[1461]: time="2026-03-02T13:02:33.596605329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 2 13:02:33.597704 containerd[1461]: time="2026-03-02T13:02:33.597548053Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 2 13:02:34.106351 kernel: hrtimer: interrupt took 4042075 ns Mar 2 13:02:34.468353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498447837.mount: Deactivated successfully. 
Mar 2 13:02:35.266430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 13:02:35.282006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:02:35.914796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:02:35.940252 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:02:36.310243 kubelet[1892]: E0302 13:02:36.309911 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:02:36.316696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:02:36.316911 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 13:02:37.241831 containerd[1461]: time="2026-03-02T13:02:37.241553992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:37.243200 containerd[1461]: time="2026-03-02T13:02:37.242957941Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 2 13:02:37.244415 containerd[1461]: time="2026-03-02T13:02:37.244362104Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:37.250638 containerd[1461]: time="2026-03-02T13:02:37.250464295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:37.253239 containerd[1461]: time="2026-03-02T13:02:37.253139010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.655438831s" Mar 2 13:02:37.253239 containerd[1461]: time="2026-03-02T13:02:37.253226605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 2 13:02:37.257839 containerd[1461]: time="2026-03-02T13:02:37.257500588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 2 13:02:37.995242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091762767.mount: Deactivated successfully. 
Mar 2 13:02:38.003942 containerd[1461]: time="2026-03-02T13:02:38.003783547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:38.005075 containerd[1461]: time="2026-03-02T13:02:38.004971143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 13:02:38.007154 containerd[1461]: time="2026-03-02T13:02:38.007070027Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:38.011672 containerd[1461]: time="2026-03-02T13:02:38.011536470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:38.012903 containerd[1461]: time="2026-03-02T13:02:38.012709985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 755.03982ms" Mar 2 13:02:38.012903 containerd[1461]: time="2026-03-02T13:02:38.012891787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 2 13:02:38.015711 containerd[1461]: time="2026-03-02T13:02:38.015406236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 2 13:02:38.594499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638508284.mount: Deactivated successfully. 
Mar 2 13:02:41.904762 containerd[1461]: time="2026-03-02T13:02:41.904526026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:41.905940 containerd[1461]: time="2026-03-02T13:02:41.905860303Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 2 13:02:41.907655 containerd[1461]: time="2026-03-02T13:02:41.907604030Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:41.915829 containerd[1461]: time="2026-03-02T13:02:41.915707361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:41.917697 containerd[1461]: time="2026-03-02T13:02:41.917606444Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.902097397s" Mar 2 13:02:41.917825 containerd[1461]: time="2026-03-02T13:02:41.917702342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 2 13:02:46.570674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 13:02:46.582886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:02:46.856532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 13:02:46.863490 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:02:47.247386 kubelet[2000]: E0302 13:02:47.247143 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:02:47.251295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:02:47.251754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:02:48.023932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:02:48.043069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:02:48.082538 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit session-5.scope)... Mar 2 13:02:48.082648 systemd[1]: Reloading... Mar 2 13:02:48.203709 zram_generator::config[2059]: No configuration found. Mar 2 13:02:48.366011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:02:48.444251 systemd[1]: Reloading finished in 361 ms. Mar 2 13:02:48.496022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 13:02:48.496150 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 13:02:48.496472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:02:48.498978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:02:48.702945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 13:02:48.709032 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:02:48.797782 kubelet[2104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:02:48.798230 kubelet[2104]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:02:48.798230 kubelet[2104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:02:48.798230 kubelet[2104]: I0302 13:02:48.797896 2104 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:02:49.041921 kubelet[2104]: I0302 13:02:49.041729 2104 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:02:49.041921 kubelet[2104]: I0302 13:02:49.041777 2104 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:02:49.042107 kubelet[2104]: I0302 13:02:49.042058 2104 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:02:49.643363 kubelet[2104]: E0302 13:02:49.643033 2104 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:02:49.645277 kubelet[2104]: I0302 13:02:49.645082 2104 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:02:49.657168 kubelet[2104]: E0302 13:02:49.657020 2104 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:02:49.657168 kubelet[2104]: I0302 13:02:49.657077 2104 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 13:02:49.667611 kubelet[2104]: I0302 13:02:49.667447 2104 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 2 13:02:49.669315 kubelet[2104]: I0302 13:02:49.669160 2104 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:02:49.669538 kubelet[2104]: I0302 13:02:49.669253 2104 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:02:49.669538 kubelet[2104]: I0302 13:02:49.669505 2104 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:02:49.669538 kubelet[2104]: I0302 13:02:49.669525 2104 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:02:49.670650 kubelet[2104]: I0302 13:02:49.669837 2104 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:02:49.677221 kubelet[2104]: I0302 13:02:49.677074 2104 kubelet.go:480] "Attempting to sync node with API 
server" Mar 2 13:02:49.677221 kubelet[2104]: I0302 13:02:49.677205 2104 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:02:49.677308 kubelet[2104]: I0302 13:02:49.677252 2104 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:02:49.679508 kubelet[2104]: I0302 13:02:49.679490 2104 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:02:49.690614 kubelet[2104]: E0302 13:02:49.690183 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:02:49.690614 kubelet[2104]: I0302 13:02:49.690334 2104 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:02:49.692297 kubelet[2104]: E0302 13:02:49.692198 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:02:49.693193 kubelet[2104]: I0302 13:02:49.693086 2104 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:02:49.694282 kubelet[2104]: W0302 13:02:49.694222 2104 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 13:02:49.702257 kubelet[2104]: I0302 13:02:49.702194 2104 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:02:49.702326 kubelet[2104]: I0302 13:02:49.702294 2104 server.go:1289] "Started kubelet" Mar 2 13:02:49.705404 kubelet[2104]: I0302 13:02:49.703092 2104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:02:49.705404 kubelet[2104]: I0302 13:02:49.703626 2104 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:02:49.705404 kubelet[2104]: I0302 13:02:49.703682 2104 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:02:49.705404 kubelet[2104]: I0302 13:02:49.704208 2104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:02:49.705404 kubelet[2104]: I0302 13:02:49.705159 2104 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:02:49.714015 kubelet[2104]: E0302 13:02:49.710425 2104 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907dbf8e8b88b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:02:49.702234251 +0000 UTC m=+0.986987703,LastTimestamp:2026-03-02 13:02:49.702234251 +0000 UTC m=+0.986987703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:02:49.714963 kubelet[2104]: E0302 13:02:49.714943 2104 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 
13:02:49.716346 kubelet[2104]: I0302 13:02:49.716008 2104 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:02:49.717947 kubelet[2104]: I0302 13:02:49.717930 2104 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:02:49.719389 kubelet[2104]: I0302 13:02:49.719094 2104 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:02:49.720271 kubelet[2104]: I0302 13:02:49.720258 2104 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:02:49.736662 kubelet[2104]: E0302 13:02:49.731448 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Mar 2 13:02:49.736662 kubelet[2104]: I0302 13:02:49.733150 2104 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:02:49.736662 kubelet[2104]: I0302 13:02:49.733230 2104 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:02:49.736662 kubelet[2104]: E0302 13:02:49.733420 2104 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:02:49.736662 kubelet[2104]: I0302 13:02:49.734203 2104 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:02:49.737168 kubelet[2104]: E0302 13:02:49.737125 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:02:49.846252 kubelet[2104]: E0302 13:02:49.845869 2104 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:02:49.866087 kubelet[2104]: I0302 13:02:49.863068 2104 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:02:49.875351 kubelet[2104]: I0302 13:02:49.875149 2104 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:02:49.875351 kubelet[2104]: I0302 13:02:49.875407 2104 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:02:49.875805 kubelet[2104]: I0302 13:02:49.875645 2104 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 13:02:49.875805 kubelet[2104]: I0302 13:02:49.875677 2104 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:02:49.882655 kubelet[2104]: E0302 13:02:49.882415 2104 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:02:49.883105 kubelet[2104]: E0302 13:02:49.882755 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:02:49.954326 kubelet[2104]: E0302 13:02:49.953779 2104 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:02:49.957541 kubelet[2104]: E0302 13:02:49.957411 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Mar 2 13:02:49.958266 kubelet[2104]: I0302 13:02:49.958196 2104 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:02:49.958266 kubelet[2104]: I0302 13:02:49.958210 2104 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:02:49.958266 kubelet[2104]: I0302 13:02:49.958246 2104 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:02:49.983796 kubelet[2104]: E0302 13:02:49.983687 2104 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:02:50.048163 kubelet[2104]: I0302 13:02:50.048050 2104 policy_none.go:49] "None policy: Start" Mar 2 13:02:50.048163 kubelet[2104]: I0302 13:02:50.048148 2104 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 
13:02:50.048163 kubelet[2104]: I0302 13:02:50.048174 2104 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:02:50.054189 kubelet[2104]: E0302 13:02:50.054135 2104 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:02:50.059763 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:02:50.078175 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:02:50.128778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:02:50.149701 kubelet[2104]: E0302 13:02:50.149618 2104 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:02:50.150333 kubelet[2104]: I0302 13:02:50.150313 2104 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:02:50.151653 kubelet[2104]: I0302 13:02:50.150402 2104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:02:50.151653 kubelet[2104]: I0302 13:02:50.151097 2104 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:02:50.175543 kubelet[2104]: E0302 13:02:50.175501 2104 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:02:50.179732 kubelet[2104]: E0302 13:02:50.179212 2104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:02:50.296955 kubelet[2104]: I0302 13:02:50.296267 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:02:50.297311 kubelet[2104]: I0302 13:02:50.297251 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:02:50.297692 kubelet[2104]: I0302 13:02:50.297398 2104 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:02:50.301053 kubelet[2104]: E0302 13:02:50.300993 2104 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Mar 2 13:02:50.335991 systemd[1]: Created slice kubepods-burstable-pod46a6979e891c03a4e1fdf141f2af7551.slice - libcontainer container kubepods-burstable-pod46a6979e891c03a4e1fdf141f2af7551.slice. 
Mar 2 13:02:50.346413 kubelet[2104]: E0302 13:02:50.346272 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:02:50.351308 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 2 13:02:50.359188 kubelet[2104]: E0302 13:02:50.359043 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Mar 2 13:02:50.363772 kubelet[2104]: E0302 13:02:50.363649 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:02:50.367860 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 2 13:02:50.370478 kubelet[2104]: E0302 13:02:50.370378 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:02:50.399023 kubelet[2104]: I0302 13:02:50.398932 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:02:50.399023 kubelet[2104]: I0302 13:02:50.398996 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:02:50.399228 kubelet[2104]: I0302 13:02:50.399020 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:02:50.399228 kubelet[2104]: I0302 13:02:50.399072 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:02:50.399228 kubelet[2104]: I0302 13:02:50.399149 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:02:50.399378 kubelet[2104]: I0302 13:02:50.399241 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:02:50.399378 kubelet[2104]: I0302 13:02:50.399267 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:02:50.528299 kubelet[2104]: I0302 13:02:50.528134 2104 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:02:50.529459 kubelet[2104]: E0302 13:02:50.529344 2104 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Mar 2 13:02:50.555953 kubelet[2104]: E0302 13:02:50.555552 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:02:50.648949 kubelet[2104]: E0302 13:02:50.648779 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:50.650997 containerd[1461]: time="2026-03-02T13:02:50.650823222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46a6979e891c03a4e1fdf141f2af7551,Namespace:kube-system,Attempt:0,}" Mar 2 13:02:50.665259 kubelet[2104]: E0302 13:02:50.665160 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:50.666106 containerd[1461]: time="2026-03-02T13:02:50.665972480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 2 13:02:50.671652 kubelet[2104]: E0302 13:02:50.671617 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:50.672621 containerd[1461]: time="2026-03-02T13:02:50.672502293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 2 13:02:50.804075 kubelet[2104]: E0302 13:02:50.803977 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:02:50.824506 kubelet[2104]: E0302 13:02:50.824323 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:02:50.932292 kubelet[2104]: I0302 13:02:50.932240 2104 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:02:50.933017 kubelet[2104]: E0302 13:02:50.932926 2104 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Mar 2 13:02:51.163148 kubelet[2104]: E0302 13:02:51.162838 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Mar 2 13:02:51.274669 kubelet[2104]: E0302 13:02:51.274521 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:02:51.342953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844083503.mount: Deactivated successfully. 
Mar 2 13:02:51.352260 containerd[1461]: time="2026-03-02T13:02:51.352122000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:02:51.357723 containerd[1461]: time="2026-03-02T13:02:51.357537088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 13:02:51.359120 containerd[1461]: time="2026-03-02T13:02:51.359043811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:02:51.360772 containerd[1461]: time="2026-03-02T13:02:51.360649845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:02:51.361431 containerd[1461]: time="2026-03-02T13:02:51.361349916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:02:51.362838 containerd[1461]: time="2026-03-02T13:02:51.362779574Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:02:51.363915 containerd[1461]: time="2026-03-02T13:02:51.363845814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:02:51.366375 containerd[1461]: time="2026-03-02T13:02:51.366277004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:02:51.369062 
containerd[1461]: time="2026-03-02T13:02:51.368916041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 717.957134ms" Mar 2 13:02:51.373325 containerd[1461]: time="2026-03-02T13:02:51.373271691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.191828ms" Mar 2 13:02:51.374354 containerd[1461]: time="2026-03-02T13:02:51.374305595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.654241ms" Mar 2 13:02:51.690428 kubelet[2104]: E0302 13:02:51.690220 2104 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:02:51.802160 kubelet[2104]: I0302 13:02:51.802001 2104 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:02:51.807666 kubelet[2104]: E0302 13:02:51.807391 2104 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: 
connection refused" node="localhost" Mar 2 13:02:52.339632 containerd[1461]: time="2026-03-02T13:02:52.338987835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:52.339632 containerd[1461]: time="2026-03-02T13:02:52.339302967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:52.339632 containerd[1461]: time="2026-03-02T13:02:52.339319966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.340383 containerd[1461]: time="2026-03-02T13:02:52.340091005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.346617 containerd[1461]: time="2026-03-02T13:02:52.343930953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:52.346617 containerd[1461]: time="2026-03-02T13:02:52.344037528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:52.346617 containerd[1461]: time="2026-03-02T13:02:52.344085025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.346617 containerd[1461]: time="2026-03-02T13:02:52.344270263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.353194 containerd[1461]: time="2026-03-02T13:02:52.351744893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:52.353194 containerd[1461]: time="2026-03-02T13:02:52.352072368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:52.353194 containerd[1461]: time="2026-03-02T13:02:52.352218991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.353194 containerd[1461]: time="2026-03-02T13:02:52.352471421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:52.386818 systemd[1]: Started cri-containerd-77866f18a3023cacbea445e77784f04d9f8a9263aa47868b83b14287fd934819.scope - libcontainer container 77866f18a3023cacbea445e77784f04d9f8a9263aa47868b83b14287fd934819. Mar 2 13:02:52.389118 systemd[1]: Started cri-containerd-b1f8de1385d49ff2c9e07eae77979f5673790fbef40328fbad42f53822006582.scope - libcontainer container b1f8de1385d49ff2c9e07eae77979f5673790fbef40328fbad42f53822006582. Mar 2 13:02:52.392008 systemd[1]: Started cri-containerd-c588caa8b0bd23249b77b99eea1e3a5f773afd7723185fb41a318e09ff1a6b4f.scope - libcontainer container c588caa8b0bd23249b77b99eea1e3a5f773afd7723185fb41a318e09ff1a6b4f. 
Mar 2 13:02:52.463252 containerd[1461]: time="2026-03-02T13:02:52.463198381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46a6979e891c03a4e1fdf141f2af7551,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1f8de1385d49ff2c9e07eae77979f5673790fbef40328fbad42f53822006582\""
Mar 2 13:02:52.468417 kubelet[2104]: E0302 13:02:52.468332 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:52.480529 containerd[1461]: time="2026-03-02T13:02:52.480414357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"c588caa8b0bd23249b77b99eea1e3a5f773afd7723185fb41a318e09ff1a6b4f\""
Mar 2 13:02:52.480824 containerd[1461]: time="2026-03-02T13:02:52.480746230Z" level=info msg="CreateContainer within sandbox \"b1f8de1385d49ff2c9e07eae77979f5673790fbef40328fbad42f53822006582\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 13:02:52.484470 containerd[1461]: time="2026-03-02T13:02:52.484164605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"77866f18a3023cacbea445e77784f04d9f8a9263aa47868b83b14287fd934819\""
Mar 2 13:02:52.486008 kubelet[2104]: E0302 13:02:52.485941 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:52.488828 kubelet[2104]: E0302 13:02:52.488669 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:52.493229 kubelet[2104]: E0302 13:02:52.493130 2104 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:02:52.493747 containerd[1461]: time="2026-03-02T13:02:52.493620302Z" level=info msg="CreateContainer within sandbox \"c588caa8b0bd23249b77b99eea1e3a5f773afd7723185fb41a318e09ff1a6b4f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 13:02:52.497185 containerd[1461]: time="2026-03-02T13:02:52.497114537Z" level=info msg="CreateContainer within sandbox \"77866f18a3023cacbea445e77784f04d9f8a9263aa47868b83b14287fd934819\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 13:02:52.530382 containerd[1461]: time="2026-03-02T13:02:52.530291215Z" level=info msg="CreateContainer within sandbox \"c588caa8b0bd23249b77b99eea1e3a5f773afd7723185fb41a318e09ff1a6b4f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5caa692d308bd57b31cbd5cff162ab551882b0ba820d5e58f0f9da41a0c4bd6e\""
Mar 2 13:02:52.531372 containerd[1461]: time="2026-03-02T13:02:52.531337432Z" level=info msg="StartContainer for \"5caa692d308bd57b31cbd5cff162ab551882b0ba820d5e58f0f9da41a0c4bd6e\""
Mar 2 13:02:52.538209 containerd[1461]: time="2026-03-02T13:02:52.538136583Z" level=info msg="CreateContainer within sandbox \"b1f8de1385d49ff2c9e07eae77979f5673790fbef40328fbad42f53822006582\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e775f0d631e12656d4131d9ae55a4a16390abd4cde2ee0e5edad0c38d816e8c\""
Mar 2 13:02:52.538701 containerd[1461]: time="2026-03-02T13:02:52.538643301Z" level=info msg="StartContainer for \"8e775f0d631e12656d4131d9ae55a4a16390abd4cde2ee0e5edad0c38d816e8c\""
Mar 2 13:02:52.541552 containerd[1461]: time="2026-03-02T13:02:52.541438471Z" level=info msg="CreateContainer within sandbox \"77866f18a3023cacbea445e77784f04d9f8a9263aa47868b83b14287fd934819\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cac4e924883934331b3e411881e4d04d14c308ca881844580b33fbe60292fbf4\""
Mar 2 13:02:52.543491 containerd[1461]: time="2026-03-02T13:02:52.542116838Z" level=info msg="StartContainer for \"cac4e924883934331b3e411881e4d04d14c308ca881844580b33fbe60292fbf4\""
Mar 2 13:02:52.563808 systemd[1]: Started cri-containerd-5caa692d308bd57b31cbd5cff162ab551882b0ba820d5e58f0f9da41a0c4bd6e.scope - libcontainer container 5caa692d308bd57b31cbd5cff162ab551882b0ba820d5e58f0f9da41a0c4bd6e.
Mar 2 13:02:52.575745 systemd[1]: Started cri-containerd-cac4e924883934331b3e411881e4d04d14c308ca881844580b33fbe60292fbf4.scope - libcontainer container cac4e924883934331b3e411881e4d04d14c308ca881844580b33fbe60292fbf4.
Mar 2 13:02:52.579810 systemd[1]: Started cri-containerd-8e775f0d631e12656d4131d9ae55a4a16390abd4cde2ee0e5edad0c38d816e8c.scope - libcontainer container 8e775f0d631e12656d4131d9ae55a4a16390abd4cde2ee0e5edad0c38d816e8c.
Mar 2 13:02:52.654301 containerd[1461]: time="2026-03-02T13:02:52.653717517Z" level=info msg="StartContainer for \"5caa692d308bd57b31cbd5cff162ab551882b0ba820d5e58f0f9da41a0c4bd6e\" returns successfully"
Mar 2 13:02:52.654301 containerd[1461]: time="2026-03-02T13:02:52.653718366Z" level=info msg="StartContainer for \"8e775f0d631e12656d4131d9ae55a4a16390abd4cde2ee0e5edad0c38d816e8c\" returns successfully"
Mar 2 13:02:52.666096 containerd[1461]: time="2026-03-02T13:02:52.666008306Z" level=info msg="StartContainer for \"cac4e924883934331b3e411881e4d04d14c308ca881844580b33fbe60292fbf4\" returns successfully"
Mar 2 13:02:53.519368 kubelet[2104]: I0302 13:02:53.519235 2104 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:02:53.558193 kubelet[2104]: E0302 13:02:53.555770 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:53.559815 kubelet[2104]: E0302 13:02:53.559741 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:53.560084 kubelet[2104]: E0302 13:02:53.560026 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:53.560847 kubelet[2104]: E0302 13:02:53.560704 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:53.566021 kubelet[2104]: E0302 13:02:53.565999 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:53.566509 kubelet[2104]: E0302 13:02:53.566422 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:54.571045 kubelet[2104]: E0302 13:02:54.571015 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:54.571045 kubelet[2104]: E0302 13:02:54.571162 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:54.571987 kubelet[2104]: E0302 13:02:54.571955 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:54.572087 kubelet[2104]: E0302 13:02:54.572073 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:55.578878 kubelet[2104]: E0302 13:02:55.578808 2104 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:02:55.579381 kubelet[2104]: E0302 13:02:55.579020 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:57.663080 kubelet[2104]: E0302 13:02:57.663008 2104 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 2 13:02:57.826899 kubelet[2104]: I0302 13:02:57.826801 2104 apiserver.go:52] "Watching apiserver"
Mar 2 13:02:57.854259 kubelet[2104]: I0302 13:02:57.853041 2104 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 13:02:57.854259 kubelet[2104]: E0302 13:02:57.853122 2104 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 2 13:02:57.921627 kubelet[2104]: I0302 13:02:57.921296 2104 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 13:02:57.925928 kubelet[2104]: I0302 13:02:57.925822 2104 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:02:57.934413 kubelet[2104]: E0302 13:02:57.934363 2104 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:02:57.934413 kubelet[2104]: I0302 13:02:57.934419 2104 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:02:57.936848 kubelet[2104]: E0302 13:02:57.936743 2104 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:02:57.936848 kubelet[2104]: I0302 13:02:57.936799 2104 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:02:57.939500 kubelet[2104]: E0302 13:02:57.939376 2104 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:02:58.383759 kubelet[2104]: I0302 13:02:58.383514 2104 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:02:58.386859 kubelet[2104]: E0302 13:02:58.386757 2104 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:02:58.387047 kubelet[2104]: E0302 13:02:58.386981 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:59.044921 kubelet[2104]: I0302 13:02:59.044866 2104 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:02:59.052709 kubelet[2104]: E0302 13:02:59.052514 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:59.770047 kubelet[2104]: E0302 13:02:59.769960 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:01.213555 systemd[1]: Reloading requested from client PID 2393 ('systemctl') (unit session-5.scope)...
Mar 2 13:03:01.213702 systemd[1]: Reloading...
Mar 2 13:03:01.561716 zram_generator::config[2433]: No configuration found.
Mar 2 13:03:01.741733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:03:01.969097 systemd[1]: Reloading finished in 754 ms.
Mar 2 13:03:02.018182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:03:02.042339 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:03:02.042722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:03:02.042807 systemd[1]: kubelet.service: Consumed 5.326s CPU time, 134.3M memory peak, 0B memory swap peak.
Mar 2 13:03:02.064080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:03:02.365835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:03:02.371700 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:03:02.448934 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:03:02.448934 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:03:02.448934 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:03:02.449273 kubelet[2478]: I0302 13:03:02.449012 2478 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:03:02.459955 kubelet[2478]: I0302 13:03:02.459929 2478 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 2 13:03:02.461328 kubelet[2478]: I0302 13:03:02.460041 2478 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:03:02.461328 kubelet[2478]: I0302 13:03:02.460320 2478 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:03:02.463145 kubelet[2478]: I0302 13:03:02.463107 2478 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 2 13:03:02.465459 kubelet[2478]: I0302 13:03:02.465426 2478 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:03:02.471483 kubelet[2478]: E0302 13:03:02.471437 2478 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:03:02.471483 kubelet[2478]: I0302 13:03:02.471485 2478 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:03:02.525448 kubelet[2478]: I0302 13:03:02.525367 2478 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 13:03:02.526194 kubelet[2478]: I0302 13:03:02.526042 2478 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:03:02.526774 kubelet[2478]: I0302 13:03:02.526153 2478 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:03:02.526774 kubelet[2478]: I0302 13:03:02.526749 2478 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:03:02.527093 kubelet[2478]: I0302 13:03:02.526782 2478 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 13:03:02.527093 kubelet[2478]: I0302 13:03:02.526841 2478 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:03:02.527159 kubelet[2478]: I0302 13:03:02.527097 2478 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 13:03:02.527159 kubelet[2478]: I0302 13:03:02.527109 2478 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:03:02.527159 kubelet[2478]: I0302 13:03:02.527136 2478 kubelet.go:386] "Adding apiserver pod source"
Mar 2 13:03:02.527159 kubelet[2478]: I0302 13:03:02.527150 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:03:02.564966 kubelet[2478]: I0302 13:03:02.564925 2478 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:03:02.568924 kubelet[2478]: I0302 13:03:02.566862 2478 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:03:02.601202 kubelet[2478]: I0302 13:03:02.601143 2478 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 13:03:02.601421 kubelet[2478]: I0302 13:03:02.601306 2478 server.go:1289] "Started kubelet"
Mar 2 13:03:02.601674 kubelet[2478]: I0302 13:03:02.601450 2478 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:03:02.602163 kubelet[2478]: I0302 13:03:02.602053 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:03:02.611388 kubelet[2478]: I0302 13:03:02.610033 2478 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:03:02.628202 kubelet[2478]: I0302 13:03:02.628039 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:03:02.631850 kubelet[2478]: I0302 13:03:02.631790 2478 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:03:02.632794 kubelet[2478]: I0302 13:03:02.629013 2478 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:03:02.632934 kubelet[2478]: I0302 13:03:02.632923 2478 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 13:03:02.633914 kubelet[2478]: I0302 13:03:02.633818 2478 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 13:03:02.634540 kubelet[2478]: I0302 13:03:02.634520 2478 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 13:03:02.635029 kubelet[2478]: I0302 13:03:02.635014 2478 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:03:02.635608 kubelet[2478]: I0302 13:03:02.635484 2478 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:03:02.640541 kubelet[2478]: E0302 13:03:02.640450 2478 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:03:02.644800 kubelet[2478]: I0302 13:03:02.644782 2478 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:03:02.706148 kubelet[2478]: I0302 13:03:02.706027 2478 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:03:02.709018 kubelet[2478]: I0302 13:03:02.708997 2478 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:03:02.709548 kubelet[2478]: I0302 13:03:02.709182 2478 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 13:03:02.709548 kubelet[2478]: I0302 13:03:02.709265 2478 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:03:02.709548 kubelet[2478]: I0302 13:03:02.709273 2478 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 13:03:02.709548 kubelet[2478]: E0302 13:03:02.709325 2478 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:03:02.751255 kubelet[2478]: I0302 13:03:02.751059 2478 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:03:02.751255 kubelet[2478]: I0302 13:03:02.751078 2478 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:03:02.751255 kubelet[2478]: I0302 13:03:02.751097 2478 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751702 2478 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751717 2478 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751759 2478 policy_none.go:49] "None policy: Start"
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751800 2478 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751814 2478 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 13:03:02.752618 kubelet[2478]: I0302 13:03:02.751926 2478 state_mem.go:75] "Updated machine memory state"
Mar 2 13:03:02.758756 kubelet[2478]: E0302 13:03:02.758738 2478 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:03:02.759354 kubelet[2478]: I0302 13:03:02.759338 2478 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:03:02.759538 kubelet[2478]: I0302 13:03:02.759418 2478 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:03:02.759997 kubelet[2478]: I0302 13:03:02.759983 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:03:02.760952 kubelet[2478]: E0302 13:03:02.760932 2478 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:03:02.811341 kubelet[2478]: I0302 13:03:02.811223 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:03:02.811498 kubelet[2478]: I0302 13:03:02.811473 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:02.812786 kubelet[2478]: I0302 13:03:02.812298 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.821747 kubelet[2478]: E0302 13:03:02.821714 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:03:02.871375 kubelet[2478]: I0302 13:03:02.871324 2478 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:03:02.883408 kubelet[2478]: I0302 13:03:02.883233 2478 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 2 13:03:02.883408 kubelet[2478]: I0302 13:03:02.883357 2478 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 13:03:02.941708 kubelet[2478]: I0302 13:03:02.941309 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:02.942474 kubelet[2478]: I0302 13:03:02.941743 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.942474 kubelet[2478]: I0302 13:03:02.941932 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.942474 kubelet[2478]: I0302 13:03:02.941977 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.942474 kubelet[2478]: I0302 13:03:02.942073 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.942474 kubelet[2478]: I0302 13:03:02.942154 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:03:02.943824 kubelet[2478]: I0302 13:03:02.942169 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:02.943824 kubelet[2478]: I0302 13:03:02.942182 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:03:02.943824 kubelet[2478]: I0302 13:03:02.942196 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46a6979e891c03a4e1fdf141f2af7551-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46a6979e891c03a4e1fdf141f2af7551\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:03.122122 kubelet[2478]: E0302 13:03:03.122032 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.122122 kubelet[2478]: E0302 13:03:03.122193 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.122122 kubelet[2478]: E0302 13:03:03.122270 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.532083 kubelet[2478]: I0302 13:03:03.532013 2478 apiserver.go:52] "Watching apiserver"
Mar 2 13:03:03.635705 kubelet[2478]: I0302 13:03:03.635674 2478 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 13:03:03.662780 kubelet[2478]: I0302 13:03:03.662682 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.662667066 podStartE2EDuration="1.662667066s" podCreationTimestamp="2026-03-02 13:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:03.661003205 +0000 UTC m=+1.280278301" watchObservedRunningTime="2026-03-02 13:03:03.662667066 +0000 UTC m=+1.281942162"
Mar 2 13:03:03.687094 kubelet[2478]: I0302 13:03:03.686993 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6869738970000001 podStartE2EDuration="1.686973897s" podCreationTimestamp="2026-03-02 13:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:03.675155439 +0000 UTC m=+1.294430535" watchObservedRunningTime="2026-03-02 13:03:03.686973897 +0000 UTC m=+1.306248993"
Mar 2 13:03:03.733169 kubelet[2478]: I0302 13:03:03.733043 2478 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:03.733370 kubelet[2478]: E0302 13:03:03.733180 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.734891 kubelet[2478]: E0302 13:03:03.734806 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.743656 kubelet[2478]: E0302 13:03:03.743605 2478 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:03:03.743952 kubelet[2478]: E0302 13:03:03.743806 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:03.912745 sudo[1590]: pam_unix(sudo:session): session closed for user root
Mar 2 13:03:03.915133 sshd[1587]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:03.920765 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:36316.service: Deactivated successfully.
Mar 2 13:03:03.923355 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:03:03.923627 systemd[1]: session-5.scope: Consumed 9.432s CPU time, 162.0M memory peak, 0B memory swap peak.
Mar 2 13:03:03.924656 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:03:03.926384 systemd-logind[1441]: Removed session 5.
Mar 2 13:03:04.734689 kubelet[2478]: E0302 13:03:04.734634 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:04.735146 kubelet[2478]: E0302 13:03:04.734824 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:05.212543 update_engine[1443]: I20260302 13:03:05.212357 1443 update_attempter.cc:509] Updating boot flags...
Mar 2 13:03:05.244687 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2552)
Mar 2 13:03:05.290256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2554)
Mar 2 13:03:05.755931 kubelet[2478]: E0302 13:03:05.754799 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:05.909147 kubelet[2478]: I0302 13:03:05.908161 2478 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 13:03:05.939646 containerd[1461]: time="2026-03-02T13:03:05.936920931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 13:03:05.964119 kubelet[2478]: I0302 13:03:05.964061 2478 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 13:03:06.459786 systemd[1]: Created slice kubepods-besteffort-pod70ecf705_04e2_4d93_8faf_59f2d657b129.slice - libcontainer container kubepods-besteffort-pod70ecf705_04e2_4d93_8faf_59f2d657b129.slice.
Mar 2 13:03:06.478439 systemd[1]: Created slice kubepods-burstable-pod486c4598_928b_43be_a255_4f3cc1f7e05a.slice - libcontainer container kubepods-burstable-pod486c4598_928b_43be_a255_4f3cc1f7e05a.slice.
Mar 2 13:03:06.553636 kubelet[2478]: I0302 13:03:06.550418 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70ecf705-04e2-4d93-8faf-59f2d657b129-xtables-lock\") pod \"kube-proxy-btj9b\" (UID: \"70ecf705-04e2-4d93-8faf-59f2d657b129\") " pod="kube-system/kube-proxy-btj9b"
Mar 2 13:03:06.553636 kubelet[2478]: I0302 13:03:06.552229 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/486c4598-928b-43be-a255-4f3cc1f7e05a-cni\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.553636 kubelet[2478]: I0302 13:03:06.552335 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mgt\" (UniqueName: \"kubernetes.io/projected/486c4598-928b-43be-a255-4f3cc1f7e05a-kube-api-access-48mgt\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.553636 kubelet[2478]: I0302 13:03:06.552498 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70ecf705-04e2-4d93-8faf-59f2d657b129-kube-proxy\") pod \"kube-proxy-btj9b\" (UID: \"70ecf705-04e2-4d93-8faf-59f2d657b129\") " pod="kube-system/kube-proxy-btj9b"
Mar 2 13:03:06.553636 kubelet[2478]: I0302 13:03:06.552521 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9bm\" (UniqueName: \"kubernetes.io/projected/70ecf705-04e2-4d93-8faf-59f2d657b129-kube-api-access-vz9bm\") pod \"kube-proxy-btj9b\" (UID: \"70ecf705-04e2-4d93-8faf-59f2d657b129\") " pod="kube-system/kube-proxy-btj9b"
Mar 2 13:03:06.554186 kubelet[2478]: I0302 13:03:06.552536 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/486c4598-928b-43be-a255-4f3cc1f7e05a-cni-plugin\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.554186 kubelet[2478]: I0302 13:03:06.552613 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/486c4598-928b-43be-a255-4f3cc1f7e05a-xtables-lock\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.554186 kubelet[2478]: I0302 13:03:06.552639 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70ecf705-04e2-4d93-8faf-59f2d657b129-lib-modules\") pod \"kube-proxy-btj9b\" (UID: \"70ecf705-04e2-4d93-8faf-59f2d657b129\") " pod="kube-system/kube-proxy-btj9b"
Mar 2 13:03:06.554186 kubelet[2478]: I0302 13:03:06.552837 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/486c4598-928b-43be-a255-4f3cc1f7e05a-run\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.554186 kubelet[2478]: I0302 13:03:06.552928 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/486c4598-928b-43be-a255-4f3cc1f7e05a-flannel-cfg\") pod \"kube-flannel-ds-tvsnb\" (UID: \"486c4598-928b-43be-a255-4f3cc1f7e05a\") " pod="kube-flannel/kube-flannel-ds-tvsnb"
Mar 2 13:03:06.733457 kubelet[2478]: E0302 13:03:06.733029 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:06.752822 kubelet[2478]: E0302 13:03:06.750937 2478 projected.go:289] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.752822 kubelet[2478]: E0302 13:03:06.750973 2478 projected.go:194] Error preparing data for projected volume kube-api-access-48mgt for pod kube-flannel/kube-flannel-ds-tvsnb: configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.752822 kubelet[2478]: E0302 13:03:06.752690 2478 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/486c4598-928b-43be-a255-4f3cc1f7e05a-kube-api-access-48mgt podName:486c4598-928b-43be-a255-4f3cc1f7e05a nodeName:}" failed. No retries permitted until 2026-03-02 13:03:07.252623472 +0000 UTC m=+4.871898568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-48mgt" (UniqueName: "kubernetes.io/projected/486c4598-928b-43be-a255-4f3cc1f7e05a-kube-api-access-48mgt") pod "kube-flannel-ds-tvsnb" (UID: "486c4598-928b-43be-a255-4f3cc1f7e05a") : configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.753107 kubelet[2478]: E0302 13:03:06.752946 2478 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.753107 kubelet[2478]: E0302 13:03:06.752970 2478 projected.go:194] Error preparing data for projected volume kube-api-access-vz9bm for pod kube-system/kube-proxy-btj9b: configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.753107 kubelet[2478]: E0302 13:03:06.753024 2478 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70ecf705-04e2-4d93-8faf-59f2d657b129-kube-api-access-vz9bm podName:70ecf705-04e2-4d93-8faf-59f2d657b129 nodeName:}" failed. No retries permitted until 2026-03-02 13:03:07.253006695 +0000 UTC m=+4.872281921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vz9bm" (UniqueName: "kubernetes.io/projected/70ecf705-04e2-4d93-8faf-59f2d657b129-kube-api-access-vz9bm") pod "kube-proxy-btj9b" (UID: "70ecf705-04e2-4d93-8faf-59f2d657b129") : configmap "kube-root-ca.crt" not found
Mar 2 13:03:06.756282 kubelet[2478]: E0302 13:03:06.756203 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:06.757502 kubelet[2478]: E0302 13:03:06.756841 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.307273 kubelet[2478]: E0302 13:03:07.307227 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.373996 kubelet[2478]: E0302 13:03:07.373930 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.374885 containerd[1461]: time="2026-03-02T13:03:07.374719575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btj9b,Uid:70ecf705-04e2-4d93-8faf-59f2d657b129,Namespace:kube-system,Attempt:0,}"
Mar 2 13:03:07.396701 kubelet[2478]: E0302 13:03:07.396429 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.397456 containerd[1461]: time="2026-03-02T13:03:07.397276442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tvsnb,Uid:486c4598-928b-43be-a255-4f3cc1f7e05a,Namespace:kube-flannel,Attempt:0,}"
Mar 2 13:03:07.502902 containerd[1461]: time="2026-03-02T13:03:07.501793118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:03:07.502902 containerd[1461]: time="2026-03-02T13:03:07.502446002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:03:07.502902 containerd[1461]: time="2026-03-02T13:03:07.502683601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:07.503650 containerd[1461]: time="2026-03-02T13:03:07.503519543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:07.516012 containerd[1461]: time="2026-03-02T13:03:07.515827754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:03:07.517068 containerd[1461]: time="2026-03-02T13:03:07.517008045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:03:07.517167 containerd[1461]: time="2026-03-02T13:03:07.517098329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:07.517341 containerd[1461]: time="2026-03-02T13:03:07.517249295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:07.654880 systemd[1]: Started cri-containerd-a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2.scope - libcontainer container a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2.
Mar 2 13:03:07.690901 systemd[1]: Started cri-containerd-078fa31ff2cf21c33b936e04129bd328f02d9e3e6f4fe498f4ad599f1b9e0af9.scope - libcontainer container 078fa31ff2cf21c33b936e04129bd328f02d9e3e6f4fe498f4ad599f1b9e0af9.
Mar 2 13:03:07.732707 containerd[1461]: time="2026-03-02T13:03:07.731950469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btj9b,Uid:70ecf705-04e2-4d93-8faf-59f2d657b129,Namespace:kube-system,Attempt:0,} returns sandbox id \"078fa31ff2cf21c33b936e04129bd328f02d9e3e6f4fe498f4ad599f1b9e0af9\""
Mar 2 13:03:07.734890 kubelet[2478]: E0302 13:03:07.734763 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.912943 containerd[1461]: time="2026-03-02T13:03:07.912495364Z" level=info msg="CreateContainer within sandbox \"078fa31ff2cf21c33b936e04129bd328f02d9e3e6f4fe498f4ad599f1b9e0af9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:03:07.924608 kubelet[2478]: E0302 13:03:07.923276 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.939174 containerd[1461]: time="2026-03-02T13:03:07.939075592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tvsnb,Uid:486c4598-928b-43be-a255-4f3cc1f7e05a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\""
Mar 2 13:03:07.940912 kubelet[2478]: E0302 13:03:07.940889 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:07.943638 containerd[1461]: time="2026-03-02T13:03:07.943403879Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Mar 2 13:03:07.949723 containerd[1461]: time="2026-03-02T13:03:07.949687005Z" level=info msg="CreateContainer within sandbox \"078fa31ff2cf21c33b936e04129bd328f02d9e3e6f4fe498f4ad599f1b9e0af9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95c6dc0761530376990b2ce7ca68bc48b0aaeca1d42f77088bc37944a35b5ddb\""
Mar 2 13:03:07.950299 containerd[1461]: time="2026-03-02T13:03:07.950224358Z" level=info msg="StartContainer for \"95c6dc0761530376990b2ce7ca68bc48b0aaeca1d42f77088bc37944a35b5ddb\""
Mar 2 13:03:08.069500 systemd[1]: Started cri-containerd-95c6dc0761530376990b2ce7ca68bc48b0aaeca1d42f77088bc37944a35b5ddb.scope - libcontainer container 95c6dc0761530376990b2ce7ca68bc48b0aaeca1d42f77088bc37944a35b5ddb.
Mar 2 13:03:08.303186 containerd[1461]: time="2026-03-02T13:03:08.302864171Z" level=info msg="StartContainer for \"95c6dc0761530376990b2ce7ca68bc48b0aaeca1d42f77088bc37944a35b5ddb\" returns successfully"
Mar 2 13:03:08.885905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028493147.mount: Deactivated successfully.
Mar 2 13:03:08.972549 kubelet[2478]: E0302 13:03:08.972482 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:09.045368 kubelet[2478]: I0302 13:03:09.044400 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-btj9b" podStartSLOduration=3.044317744 podStartE2EDuration="3.044317744s" podCreationTimestamp="2026-03-02 13:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:09.044271341 +0000 UTC m=+6.663546457" watchObservedRunningTime="2026-03-02 13:03:09.044317744 +0000 UTC m=+6.663592840"
Mar 2 13:03:09.098441 containerd[1461]: time="2026-03-02T13:03:09.097497014Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:09.099769 containerd[1461]: time="2026-03-02T13:03:09.099242639Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Mar 2 13:03:09.104539 containerd[1461]: time="2026-03-02T13:03:09.104450416Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:09.113252 containerd[1461]: time="2026-03-02T13:03:09.113139017Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:09.115114 containerd[1461]: time="2026-03-02T13:03:09.115064044Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.171557424s"
Mar 2 13:03:09.115190 containerd[1461]: time="2026-03-02T13:03:09.115113799Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Mar 2 13:03:09.126258 containerd[1461]: time="2026-03-02T13:03:09.126136835Z" level=info msg="CreateContainer within sandbox \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Mar 2 13:03:09.151776 containerd[1461]: time="2026-03-02T13:03:09.151685129Z" level=info msg="CreateContainer within sandbox \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96\""
Mar 2 13:03:09.154026 containerd[1461]: time="2026-03-02T13:03:09.153869309Z" level=info msg="StartContainer for \"30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96\""
Mar 2 13:03:09.219788 systemd[1]: Started cri-containerd-30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96.scope - libcontainer container 30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96.
Mar 2 13:03:09.257131 systemd[1]: cri-containerd-30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96.scope: Deactivated successfully.
Mar 2 13:03:09.262924 containerd[1461]: time="2026-03-02T13:03:09.262850483Z" level=info msg="StartContainer for \"30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96\" returns successfully"
Mar 2 13:03:09.335177 containerd[1461]: time="2026-03-02T13:03:09.335065318Z" level=info msg="shim disconnected" id=30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96 namespace=k8s.io
Mar 2 13:03:09.335177 containerd[1461]: time="2026-03-02T13:03:09.335172063Z" level=warning msg="cleaning up after shim disconnected" id=30f88c5fb65b8b431ac08c4e0f9c752ed0ac36adb729d8d6f9a6fd241b548c96 namespace=k8s.io
Mar 2 13:03:09.335177 containerd[1461]: time="2026-03-02T13:03:09.335186530Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:03:10.010884 kubelet[2478]: E0302 13:03:10.010760 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:10.010884 kubelet[2478]: E0302 13:03:10.010774 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:10.011906 containerd[1461]: time="2026-03-02T13:03:10.011827040Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Mar 2 13:03:12.854935 containerd[1461]: time="2026-03-02T13:03:12.854816556Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:12.856077 containerd[1461]: time="2026-03-02T13:03:12.855988921Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Mar 2 13:03:12.858139 containerd[1461]: time="2026-03-02T13:03:12.858066442Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:12.864819 containerd[1461]: time="2026-03-02T13:03:12.864721497Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:03:12.871208 containerd[1461]: time="2026-03-02T13:03:12.871166508Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.859296852s"
Mar 2 13:03:12.871430 containerd[1461]: time="2026-03-02T13:03:12.871328130Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Mar 2 13:03:12.880103 containerd[1461]: time="2026-03-02T13:03:12.879462156Z" level=info msg="CreateContainer within sandbox \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 2 13:03:12.914922 containerd[1461]: time="2026-03-02T13:03:12.914826535Z" level=info msg="CreateContainer within sandbox \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0\""
Mar 2 13:03:12.915715 containerd[1461]: time="2026-03-02T13:03:12.915540607Z" level=info msg="StartContainer for \"a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0\""
Mar 2 13:03:12.959893 systemd[1]: Started cri-containerd-a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0.scope - libcontainer container a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0.
Mar 2 13:03:13.019285 systemd[1]: cri-containerd-a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0.scope: Deactivated successfully.
Mar 2 13:03:13.022096 containerd[1461]: time="2026-03-02T13:03:13.022040387Z" level=info msg="StartContainer for \"a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0\" returns successfully"
Mar 2 13:03:13.027976 kubelet[2478]: I0302 13:03:13.027918 2478 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 2 13:03:13.064007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0-rootfs.mount: Deactivated successfully.
Mar 2 13:03:13.072078 kubelet[2478]: E0302 13:03:13.069935 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:13.101622 containerd[1461]: time="2026-03-02T13:03:13.101463039Z" level=info msg="shim disconnected" id=a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0 namespace=k8s.io
Mar 2 13:03:13.101622 containerd[1461]: time="2026-03-02T13:03:13.101541897Z" level=warning msg="cleaning up after shim disconnected" id=a8ce7ab11f6fa881643930610c1519b34fc1fd74b2ed59e0b929a861987333b0 namespace=k8s.io
Mar 2 13:03:13.101622 containerd[1461]: time="2026-03-02T13:03:13.101555137Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:03:13.117696 systemd[1]: Created slice kubepods-burstable-pod33f77bbe_c50a_4d61_b4a7_af7e264f6d0a.slice - libcontainer container kubepods-burstable-pod33f77bbe_c50a_4d61_b4a7_af7e264f6d0a.slice.
Mar 2 13:03:13.125047 systemd[1]: Created slice kubepods-burstable-podde073428_2e1e_43f2_b74f_c79fcd2347e9.slice - libcontainer container kubepods-burstable-podde073428_2e1e_43f2_b74f_c79fcd2347e9.slice.
Mar 2 13:03:13.256282 kubelet[2478]: I0302 13:03:13.256215 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmnjz\" (UniqueName: \"kubernetes.io/projected/33f77bbe-c50a-4d61-b4a7-af7e264f6d0a-kube-api-access-kmnjz\") pod \"coredns-674b8bbfcf-wkv5m\" (UID: \"33f77bbe-c50a-4d61-b4a7-af7e264f6d0a\") " pod="kube-system/coredns-674b8bbfcf-wkv5m" Mar 2 13:03:13.256437 kubelet[2478]: I0302 13:03:13.256365 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de073428-2e1e-43f2-b74f-c79fcd2347e9-config-volume\") pod \"coredns-674b8bbfcf-zc66c\" (UID: \"de073428-2e1e-43f2-b74f-c79fcd2347e9\") " pod="kube-system/coredns-674b8bbfcf-zc66c" Mar 2 13:03:13.256437 kubelet[2478]: I0302 13:03:13.256404 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhfxl\" (UniqueName: \"kubernetes.io/projected/de073428-2e1e-43f2-b74f-c79fcd2347e9-kube-api-access-rhfxl\") pod \"coredns-674b8bbfcf-zc66c\" (UID: \"de073428-2e1e-43f2-b74f-c79fcd2347e9\") " pod="kube-system/coredns-674b8bbfcf-zc66c" Mar 2 13:03:13.256437 kubelet[2478]: I0302 13:03:13.256429 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33f77bbe-c50a-4d61-b4a7-af7e264f6d0a-config-volume\") pod \"coredns-674b8bbfcf-wkv5m\" (UID: \"33f77bbe-c50a-4d61-b4a7-af7e264f6d0a\") " pod="kube-system/coredns-674b8bbfcf-wkv5m" Mar 2 13:03:13.429640 kubelet[2478]: E0302 13:03:13.429463 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:13.430406 containerd[1461]: time="2026-03-02T13:03:13.430356297Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-zc66c,Uid:de073428-2e1e-43f2-b74f-c79fcd2347e9,Namespace:kube-system,Attempt:0,}" Mar 2 13:03:13.430683 kubelet[2478]: E0302 13:03:13.430536 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:13.431347 containerd[1461]: time="2026-03-02T13:03:13.431180473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wkv5m,Uid:33f77bbe-c50a-4d61-b4a7-af7e264f6d0a,Namespace:kube-system,Attempt:0,}" Mar 2 13:03:13.520901 containerd[1461]: time="2026-03-02T13:03:13.520756182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wkv5m,Uid:33f77bbe-c50a-4d61-b4a7-af7e264f6d0a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6519f46554067d13b4eca4d5dc19c9d4ccceb8331675d5400804ebb7cd428140\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 2 13:03:13.521296 kubelet[2478]: E0302 13:03:13.521051 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6519f46554067d13b4eca4d5dc19c9d4ccceb8331675d5400804ebb7cd428140\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 2 13:03:13.521296 kubelet[2478]: E0302 13:03:13.521168 2478 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6519f46554067d13b4eca4d5dc19c9d4ccceb8331675d5400804ebb7cd428140\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-wkv5m" Mar 2 13:03:13.521296 kubelet[2478]: E0302 13:03:13.521248 2478 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6519f46554067d13b4eca4d5dc19c9d4ccceb8331675d5400804ebb7cd428140\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-wkv5m" Mar 2 13:03:13.521456 kubelet[2478]: E0302 13:03:13.521303 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wkv5m_kube-system(33f77bbe-c50a-4d61-b4a7-af7e264f6d0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wkv5m_kube-system(33f77bbe-c50a-4d61-b4a7-af7e264f6d0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6519f46554067d13b4eca4d5dc19c9d4ccceb8331675d5400804ebb7cd428140\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-wkv5m" podUID="33f77bbe-c50a-4d61-b4a7-af7e264f6d0a" Mar 2 13:03:13.522315 containerd[1461]: time="2026-03-02T13:03:13.522233569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zc66c,Uid:de073428-2e1e-43f2-b74f-c79fcd2347e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e0fa57fd8792a9cff991411c777053ea4e749f7f03cd71d6b7b71b617e95dee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 2 13:03:13.522532 kubelet[2478]: E0302 13:03:13.522492 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0fa57fd8792a9cff991411c777053ea4e749f7f03cd71d6b7b71b617e95dee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 2 
13:03:13.522652 kubelet[2478]: E0302 13:03:13.522549 2478 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0fa57fd8792a9cff991411c777053ea4e749f7f03cd71d6b7b71b617e95dee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-zc66c" Mar 2 13:03:13.522652 kubelet[2478]: E0302 13:03:13.522642 2478 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0fa57fd8792a9cff991411c777053ea4e749f7f03cd71d6b7b71b617e95dee\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-zc66c" Mar 2 13:03:13.522821 kubelet[2478]: E0302 13:03:13.522698 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zc66c_kube-system(de073428-2e1e-43f2-b74f-c79fcd2347e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zc66c_kube-system(de073428-2e1e-43f2-b74f-c79fcd2347e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e0fa57fd8792a9cff991411c777053ea4e749f7f03cd71d6b7b71b617e95dee\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-zc66c" podUID="de073428-2e1e-43f2-b74f-c79fcd2347e9" Mar 2 13:03:14.087347 kubelet[2478]: E0302 13:03:14.087299 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:14.095187 containerd[1461]: time="2026-03-02T13:03:14.095068052Z" level=info msg="CreateContainer within sandbox 
\"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 2 13:03:14.118253 containerd[1461]: time="2026-03-02T13:03:14.118149934Z" level=info msg="CreateContainer within sandbox \"a499c5a097823681d92a25378ccb9783664dd9634fb38e2b0f8c488cf4fca3b2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4f888447048b94ec6248f1aac66fca3f6aa5c45cc7bfb5513ce899cdafb7217c\""
Mar 2 13:03:14.119506 containerd[1461]: time="2026-03-02T13:03:14.119380160Z" level=info msg="StartContainer for \"4f888447048b94ec6248f1aac66fca3f6aa5c45cc7bfb5513ce899cdafb7217c\""
Mar 2 13:03:14.170064 systemd[1]: Started cri-containerd-4f888447048b94ec6248f1aac66fca3f6aa5c45cc7bfb5513ce899cdafb7217c.scope - libcontainer container 4f888447048b94ec6248f1aac66fca3f6aa5c45cc7bfb5513ce899cdafb7217c.
Mar 2 13:03:14.234224 containerd[1461]: time="2026-03-02T13:03:14.234067858Z" level=info msg="StartContainer for \"4f888447048b94ec6248f1aac66fca3f6aa5c45cc7bfb5513ce899cdafb7217c\" returns successfully"
Mar 2 13:03:15.108435 kubelet[2478]: E0302 13:03:15.108396 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:15.125713 kubelet[2478]: I0302 13:03:15.125629 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tvsnb" podStartSLOduration=4.1945531129999996 podStartE2EDuration="9.125551857s" podCreationTimestamp="2026-03-02 13:03:06 +0000 UTC" firstStartedPulling="2026-03-02 13:03:07.94252575 +0000 UTC m=+5.561800847" lastFinishedPulling="2026-03-02 13:03:12.873524495 +0000 UTC m=+10.492799591" observedRunningTime="2026-03-02 13:03:15.125283173 +0000 UTC m=+12.744558299" watchObservedRunningTime="2026-03-02 13:03:15.125551857 +0000 UTC m=+12.744826963"
Mar 2 13:03:15.346011 systemd-networkd[1380]: flannel.1: Link UP
Mar 2 13:03:15.346024 systemd-networkd[1380]: flannel.1: Gained carrier
Mar 2 13:03:16.098478 kubelet[2478]: E0302 13:03:16.098358 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:16.961919 systemd-networkd[1380]: flannel.1: Gained IPv6LL
Mar 2 13:03:24.711030 kubelet[2478]: E0302 13:03:24.710874 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:24.711787 containerd[1461]: time="2026-03-02T13:03:24.711493176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wkv5m,Uid:33f77bbe-c50a-4d61-b4a7-af7e264f6d0a,Namespace:kube-system,Attempt:0,}"
Mar 2 13:03:24.750745 systemd-networkd[1380]: cni0: Link UP
Mar 2 13:03:24.750759 systemd-networkd[1380]: cni0: Gained carrier
Mar 2 13:03:24.754634 systemd-networkd[1380]: cni0: Lost carrier
Mar 2 13:03:24.782272 systemd-networkd[1380]: vethc2773faf: Link UP
Mar 2 13:03:24.784095 kernel: cni0: port 1(vethc2773faf) entered blocking state
Mar 2 13:03:24.784205 kernel: cni0: port 1(vethc2773faf) entered disabled state
Mar 2 13:03:24.784239 kernel: vethc2773faf: entered allmulticast mode
Mar 2 13:03:24.789350 kernel: vethc2773faf: entered promiscuous mode
Mar 2 13:03:24.789394 kernel: cni0: port 1(vethc2773faf) entered blocking state
Mar 2 13:03:24.793523 kernel: cni0: port 1(vethc2773faf) entered forwarding state
Mar 2 13:03:24.793797 kernel: cni0: port 1(vethc2773faf) entered disabled state
Mar 2 13:03:24.807109 kernel: cni0: port 1(vethc2773faf) entered blocking state
Mar 2 13:03:24.807202 kernel: cni0: port 1(vethc2773faf) entered forwarding state
Mar 2 13:03:24.807364 systemd-networkd[1380]: vethc2773faf: Gained carrier
Mar 2 13:03:24.808240 systemd-networkd[1380]: cni0: Gained carrier
Mar 2 13:03:24.812653 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000104950), "name":"cbr0", "type":"bridge"}
Mar 2 13:03:24.812653 containerd[1461]: delegateAdd: netconf sent to delegate plugin:
Mar 2 13:03:24.846714 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-02T13:03:24.846550251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:03:24.846714 containerd[1461]: time="2026-03-02T13:03:24.846665211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:03:24.846714 containerd[1461]: time="2026-03-02T13:03:24.846678046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:24.846946 containerd[1461]: time="2026-03-02T13:03:24.846768269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:24.891870 systemd[1]: Started cri-containerd-31bf3275c2e44cebaa9a7442fdc877264475e18ca8b063d603bd1c755663563a.scope - libcontainer container 31bf3275c2e44cebaa9a7442fdc877264475e18ca8b063d603bd1c755663563a.
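The delegateAdd records above show flannel handing a bridge netconf to the host-local IPAM plugin: each node gets a /24 out of a cluster-wide /17, and the bridge MTU is lowered to 1450 to leave room for the vxlan header on flannel.1. A minimal sketch (not part of the log) that parses the logged JSON and checks those relationships:

```python
import ipaddress
import json

# The netconf string exactly as logged by the flannel CNI plugin above.
netconf = json.loads(
    '{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,'
    '"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],'
    '"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},'
    '"isDefaultGateway":true,"isGateway":true,"mtu":1450,'
    '"name":"cbr0","type":"bridge"}'
)

node_subnet = ipaddress.ip_network(netconf["ipam"]["ranges"][0][0]["subnet"])
cluster_route = ipaddress.ip_network(netconf["ipam"]["routes"][0]["dst"])

# The per-node /24 handed to host-local must sit inside the cluster-wide
# /17 that flannel routes over the vxlan device.
assert node_subnet.subnet_of(cluster_route)

# 1450 = 1500 (ethernet MTU) - 50 bytes of vxlan encapsulation overhead.
assert netconf["mtu"] == 1450
```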
Mar 2 13:03:24.906756 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:03:24.942165 containerd[1461]: time="2026-03-02T13:03:24.942034298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wkv5m,Uid:33f77bbe-c50a-4d61-b4a7-af7e264f6d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"31bf3275c2e44cebaa9a7442fdc877264475e18ca8b063d603bd1c755663563a\""
Mar 2 13:03:24.943282 kubelet[2478]: E0302 13:03:24.943223 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:24.949246 containerd[1461]: time="2026-03-02T13:03:24.949079205Z" level=info msg="CreateContainer within sandbox \"31bf3275c2e44cebaa9a7442fdc877264475e18ca8b063d603bd1c755663563a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:03:24.991970 containerd[1461]: time="2026-03-02T13:03:24.991759053Z" level=info msg="CreateContainer within sandbox \"31bf3275c2e44cebaa9a7442fdc877264475e18ca8b063d603bd1c755663563a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f520f02376c28484247610cf39bfdc462fba33f3c2675349793ce9c08dc4eaa\""
Mar 2 13:03:24.993516 containerd[1461]: time="2026-03-02T13:03:24.993348470Z" level=info msg="StartContainer for \"8f520f02376c28484247610cf39bfdc462fba33f3c2675349793ce9c08dc4eaa\""
Mar 2 13:03:25.035822 systemd[1]: Started cri-containerd-8f520f02376c28484247610cf39bfdc462fba33f3c2675349793ce9c08dc4eaa.scope - libcontainer container 8f520f02376c28484247610cf39bfdc462fba33f3c2675349793ce9c08dc4eaa.
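The kubelet pod_startup_latency_tracker entries in this log report two durations: podStartSLOduration, which by the SLI definition excludes image-pull time, and podStartE2EDuration, which includes it. A small sketch (not part of the log) using the timestamps logged for kube-flannel-ds-tvsnb above, with fractional seconds trimmed to microseconds so `datetime.fromisoformat` accepts them, showing that the gap between the two is the image pull:

```python
from datetime import datetime

# Pull-window timestamps from the kubelet record for kube-flannel-ds-tvsnb.
first_started_pulling = datetime.fromisoformat("2026-03-02 13:03:07.942525")
last_finished_pulling = datetime.fromisoformat("2026-03-02 13:03:12.873524")
pull_seconds = (last_finished_pulling - first_started_pulling).total_seconds()

slo_duration = 4.194553113   # podStartSLOduration (excludes image pull)
e2e_duration = 9.125551857   # podStartE2EDuration (includes image pull)

# E2E minus SLO duration should equal the time spent pulling the image.
assert abs((e2e_duration - slo_duration) - pull_seconds) < 1e-3
```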
Mar 2 13:03:25.106774 containerd[1461]: time="2026-03-02T13:03:25.106682316Z" level=info msg="StartContainer for \"8f520f02376c28484247610cf39bfdc462fba33f3c2675349793ce9c08dc4eaa\" returns successfully"
Mar 2 13:03:25.127778 kubelet[2478]: E0302 13:03:25.127667 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:25.728469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295861784.mount: Deactivated successfully.
Mar 2 13:03:25.922058 systemd-networkd[1380]: cni0: Gained IPv6LL
Mar 2 13:03:26.130669 kubelet[2478]: E0302 13:03:26.129880 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:26.149265 kubelet[2478]: I0302 13:03:26.148904 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wkv5m" podStartSLOduration=20.148886797 podStartE2EDuration="20.148886797s" podCreationTimestamp="2026-03-02 13:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:25.151086369 +0000 UTC m=+22.770361465" watchObservedRunningTime="2026-03-02 13:03:26.148886797 +0000 UTC m=+23.768161894"
Mar 2 13:03:26.712653 kubelet[2478]: E0302 13:03:26.712228 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:26.713667 containerd[1461]: time="2026-03-02T13:03:26.713275902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zc66c,Uid:de073428-2e1e-43f2-b74f-c79fcd2347e9,Namespace:kube-system,Attempt:0,}"
Mar 2 13:03:26.751374 systemd-networkd[1380]: veth0c072236: Link UP
Mar 2 13:03:26.757907 kernel: cni0: port 2(veth0c072236) entered blocking state
Mar 2 13:03:26.757991 kernel: cni0: port 2(veth0c072236) entered disabled state
Mar 2 13:03:26.764018 kernel: veth0c072236: entered allmulticast mode
Mar 2 13:03:26.770113 kernel: veth0c072236: entered promiscuous mode
Mar 2 13:03:26.789918 kernel: cni0: port 2(veth0c072236) entered blocking state
Mar 2 13:03:26.789978 kernel: cni0: port 2(veth0c072236) entered forwarding state
Mar 2 13:03:26.789773 systemd-networkd[1380]: veth0c072236: Gained carrier
Mar 2 13:03:26.793703 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Mar 2 13:03:26.793703 containerd[1461]: delegateAdd: netconf sent to delegate plugin:
Mar 2 13:03:26.817830 systemd-networkd[1380]: vethc2773faf: Gained IPv6LL
Mar 2 13:03:26.836366 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-02T13:03:26.835946881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:03:26.836366 containerd[1461]: time="2026-03-02T13:03:26.836238395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:03:26.836366 containerd[1461]: time="2026-03-02T13:03:26.836261212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:26.836762 containerd[1461]: time="2026-03-02T13:03:26.836410689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:03:26.903911 systemd[1]: Started cri-containerd-5ae9a3097af9eece1637332f7039d0c8ff296e8aaeed2d3de93c58923a8a2f3b.scope - libcontainer container 5ae9a3097af9eece1637332f7039d0c8ff296e8aaeed2d3de93c58923a8a2f3b.
Mar 2 13:03:26.920156 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:03:26.975935 containerd[1461]: time="2026-03-02T13:03:26.975757636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zc66c,Uid:de073428-2e1e-43f2-b74f-c79fcd2347e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae9a3097af9eece1637332f7039d0c8ff296e8aaeed2d3de93c58923a8a2f3b\""
Mar 2 13:03:26.977034 kubelet[2478]: E0302 13:03:26.976997 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:26.983863 containerd[1461]: time="2026-03-02T13:03:26.983799206Z" level=info msg="CreateContainer within sandbox \"5ae9a3097af9eece1637332f7039d0c8ff296e8aaeed2d3de93c58923a8a2f3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:03:27.004741 containerd[1461]: time="2026-03-02T13:03:27.004657103Z" level=info msg="CreateContainer within sandbox \"5ae9a3097af9eece1637332f7039d0c8ff296e8aaeed2d3de93c58923a8a2f3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa9c5c6812cc4e4673adc8e0a0fc7ba7c819c3a16f4d28e709f6409ffe14de6c\""
Mar 2 13:03:27.007313 containerd[1461]: time="2026-03-02T13:03:27.005809255Z" level=info msg="StartContainer for \"aa9c5c6812cc4e4673adc8e0a0fc7ba7c819c3a16f4d28e709f6409ffe14de6c\""
Mar 2 13:03:27.076517 systemd[1]: Started cri-containerd-aa9c5c6812cc4e4673adc8e0a0fc7ba7c819c3a16f4d28e709f6409ffe14de6c.scope - libcontainer container aa9c5c6812cc4e4673adc8e0a0fc7ba7c819c3a16f4d28e709f6409ffe14de6c.
Mar 2 13:03:27.196441 kubelet[2478]: E0302 13:03:27.196091 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:27.215099 containerd[1461]: time="2026-03-02T13:03:27.215022159Z" level=info msg="StartContainer for \"aa9c5c6812cc4e4673adc8e0a0fc7ba7c819c3a16f4d28e709f6409ffe14de6c\" returns successfully"
Mar 2 13:03:27.733542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950644757.mount: Deactivated successfully.
Mar 2 13:03:28.185760 kubelet[2478]: E0302 13:03:28.185551 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:28.201146 kubelet[2478]: I0302 13:03:28.200940 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zc66c" podStartSLOduration=22.200916726 podStartE2EDuration="22.200916726s" podCreationTimestamp="2026-03-02 13:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:28.20029324 +0000 UTC m=+25.819568366" watchObservedRunningTime="2026-03-02 13:03:28.200916726 +0000 UTC m=+25.820191822"
Mar 2 13:03:28.545981 systemd-networkd[1380]: veth0c072236: Gained IPv6LL
Mar 2 13:03:29.188492 kubelet[2478]: E0302 13:03:29.188359 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:30.191160 kubelet[2478]: E0302 13:03:30.191055 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:03:54.274820 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:52244.service - OpenSSH per-connection server daemon (10.0.0.1:52244).
Mar 2 13:03:54.331998 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 52244 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:54.334309 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:54.343243 systemd-logind[1441]: New session 6 of user core.
Mar 2 13:03:54.350853 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:03:54.521302 sshd[3529]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:54.526812 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:52244.service: Deactivated successfully.
Mar 2 13:03:54.529119 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:03:54.530204 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:03:54.531714 systemd-logind[1441]: Removed session 6.
Mar 2 13:03:59.545079 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256).
Mar 2 13:03:59.618207 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:59.620146 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:59.625981 systemd-logind[1441]: New session 7 of user core.
Mar 2 13:03:59.631915 systemd[1]: Started session-7.scope - Session 7 of User core.
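The recurring dns.go:153 warnings throughout this log come from kubelet capping the nameserver list at three entries (the glibc MAXNS limit): the node's resolv.conf lists more servers than that, so the rest are dropped and the applied line is always "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical sketch of that truncation, not kubelet's actual code; the fourth nameserver here is an illustrative placeholder:

```python
# kubelet caps the pod resolv.conf at 3 nameservers (glibc's MAXNS).
MAXNS = 3

# The first three entries match the "applied nameserver line" in the log;
# the fourth is a hypothetical extra entry that would trigger the warning.
resolv_conf_nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]

applied = resolv_conf_nameservers[:MAXNS]
omitted = resolv_conf_nameservers[MAXNS:]

# Matches the logged line: "the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
assert " ".join(applied) == "1.1.1.1 1.0.0.1 8.8.8.8"
```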
Mar 2 13:03:59.806938 sshd[3568]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:59.812135 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:52256.service: Deactivated successfully.
Mar 2 13:03:59.815090 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:03:59.816317 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:03:59.818900 systemd-logind[1441]: Removed session 7.
Mar 2 13:04:04.839957 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:44754.service - OpenSSH per-connection server daemon (10.0.0.1:44754).
Mar 2 13:04:04.929255 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:04.931817 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:04.939133 systemd-logind[1441]: New session 8 of user core.
Mar 2 13:04:04.947840 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:04:05.113971 sshd[3605]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:05.120874 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:44754.service: Deactivated successfully.
Mar 2 13:04:05.123846 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:04:05.125258 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:04:05.127864 systemd-logind[1441]: Removed session 8.
Mar 2 13:04:07.723151 kubelet[2478]: E0302 13:04:07.723037 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:10.118063 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:44758.service - OpenSSH per-connection server daemon (10.0.0.1:44758).
Mar 2 13:04:10.160013 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 44758 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:10.162130 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:10.168115 systemd-logind[1441]: New session 9 of user core.
Mar 2 13:04:10.174719 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:04:10.301930 sshd[3643]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:10.312987 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:44758.service: Deactivated successfully.
Mar 2 13:04:10.314748 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:04:10.316851 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:04:10.318106 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:44762.service - OpenSSH per-connection server daemon (10.0.0.1:44762).
Mar 2 13:04:10.320020 systemd-logind[1441]: Removed session 9.
Mar 2 13:04:10.380965 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 44762 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:10.382653 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:10.388896 systemd-logind[1441]: New session 10 of user core.
Mar 2 13:04:10.400760 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 2 13:04:10.572223 sshd[3659]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:10.577893 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:44762.service: Deactivated successfully.
Mar 2 13:04:10.579923 systemd[1]: session-10.scope: Deactivated successfully.
Mar 2 13:04:10.583330 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Mar 2 13:04:10.590094 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:44764.service - OpenSSH per-connection server daemon (10.0.0.1:44764).
Mar 2 13:04:10.593056 systemd-logind[1441]: Removed session 10.
Mar 2 13:04:10.625975 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 44764 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:10.627852 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:10.633345 systemd-logind[1441]: New session 11 of user core.
Mar 2 13:04:10.643788 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 2 13:04:10.712158 kubelet[2478]: E0302 13:04:10.711125 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:10.762902 sshd[3671]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:10.769359 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:44764.service: Deactivated successfully.
Mar 2 13:04:10.772156 systemd[1]: session-11.scope: Deactivated successfully.
Mar 2 13:04:10.773426 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Mar 2 13:04:10.775324 systemd-logind[1441]: Removed session 11.
Mar 2 13:04:15.785204 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:49066.service - OpenSSH per-connection server daemon (10.0.0.1:49066).
Mar 2 13:04:15.857053 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 49066 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:15.859704 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:15.866724 systemd-logind[1441]: New session 12 of user core.
Mar 2 13:04:15.875857 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 2 13:04:16.007410 sshd[3712]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:16.015118 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:49066.service: Deactivated successfully.
Mar 2 13:04:16.025349 systemd[1]: session-12.scope: Deactivated successfully.
Mar 2 13:04:16.027318 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Mar 2 13:04:16.029828 systemd-logind[1441]: Removed session 12.
Mar 2 13:04:21.053045 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:49068.service - OpenSSH per-connection server daemon (10.0.0.1:49068).
Mar 2 13:04:21.098427 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 49068 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:21.101401 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:21.115096 systemd-logind[1441]: New session 13 of user core.
Mar 2 13:04:21.124656 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 2 13:04:21.414154 sshd[3762]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:21.430741 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:49068.service: Deactivated successfully.
Mar 2 13:04:21.437400 systemd[1]: session-13.scope: Deactivated successfully.
Mar 2 13:04:21.444284 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Mar 2 13:04:21.449447 systemd-logind[1441]: Removed session 13.
Mar 2 13:04:21.716988 kubelet[2478]: E0302 13:04:21.716701 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:23.712388 kubelet[2478]: E0302 13:04:23.712249 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:26.432737 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:40044.service - OpenSSH per-connection server daemon (10.0.0.1:40044).
Mar 2 13:04:26.476534 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 40044 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:26.479056 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:26.485300 systemd-logind[1441]: New session 14 of user core.
Mar 2 13:04:26.493098 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 2 13:04:26.643390 sshd[3797]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:26.658202 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:40044.service: Deactivated successfully.
Mar 2 13:04:26.661387 systemd[1]: session-14.scope: Deactivated successfully.
Mar 2 13:04:26.663928 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Mar 2 13:04:26.675418 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048).
Mar 2 13:04:26.677151 systemd-logind[1441]: Removed session 14.
Mar 2 13:04:26.712172 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:26.715479 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:26.724498 systemd-logind[1441]: New session 15 of user core.
Mar 2 13:04:26.733884 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:04:27.001105 sshd[3812]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:27.023743 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:40048.service: Deactivated successfully.
Mar 2 13:04:27.026116 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:04:27.028282 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:04:27.033928 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:40058.service - OpenSSH per-connection server daemon (10.0.0.1:40058).
Mar 2 13:04:27.035083 systemd-logind[1441]: Removed session 15.
Mar 2 13:04:27.073595 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 40058 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:27.075410 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:27.082546 systemd-logind[1441]: New session 16 of user core.
Mar 2 13:04:27.099865 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:04:27.758922 sshd[3824]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:27.769310 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:40058.service: Deactivated successfully.
Mar 2 13:04:27.775710 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:04:27.778392 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:04:27.782080 systemd-logind[1441]: Removed session 16.
Mar 2 13:04:27.796293 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:40062.service - OpenSSH per-connection server daemon (10.0.0.1:40062).
Mar 2 13:04:27.834169 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 40062 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:27.836151 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:27.842042 systemd-logind[1441]: New session 17 of user core.
Mar 2 13:04:27.848897 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:04:28.097192 sshd[3845]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:28.110456 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:40062.service: Deactivated successfully.
Mar 2 13:04:28.117418 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:04:28.120936 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:04:28.133962 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:40066.service - OpenSSH per-connection server daemon (10.0.0.1:40066).
Mar 2 13:04:28.135251 systemd-logind[1441]: Removed session 17.
Mar 2 13:04:28.173506 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 40066 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:28.175539 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:28.181431 systemd-logind[1441]: New session 18 of user core.
Mar 2 13:04:28.194767 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:04:28.337799 sshd[3857]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:28.343276 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:40066.service: Deactivated successfully.
Mar 2 13:04:28.346337 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:04:28.347932 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:04:28.349982 systemd-logind[1441]: Removed session 18.
Mar 2 13:04:33.357658 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:41802.service - OpenSSH per-connection server daemon (10.0.0.1:41802).
Mar 2 13:04:33.401655 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 41802 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:33.404066 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:33.410484 systemd-logind[1441]: New session 19 of user core.
Mar 2 13:04:33.425796 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:04:33.564686 sshd[3891]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:33.569516 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:41802.service: Deactivated successfully.
Mar 2 13:04:33.571853 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:04:33.572874 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:04:33.574927 systemd-logind[1441]: Removed session 19.
Mar 2 13:04:35.711138 kubelet[2478]: E0302 13:04:35.711050 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:38.584552 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:41804.service - OpenSSH per-connection server daemon (10.0.0.1:41804).
Mar 2 13:04:38.656430 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 41804 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:38.658790 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:38.666091 systemd-logind[1441]: New session 20 of user core.
Mar 2 13:04:38.674817 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:04:38.847794 sshd[3928]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:38.853958 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:41804.service: Deactivated successfully.
Mar 2 13:04:38.856717 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:04:38.857855 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:04:38.859792 systemd-logind[1441]: Removed session 20.
Mar 2 13:04:43.875893 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294).
Mar 2 13:04:43.913392 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:43.915765 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:43.923383 systemd-logind[1441]: New session 21 of user core.
Mar 2 13:04:43.933883 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:04:44.075832 sshd[3964]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:44.081615 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:58294.service: Deactivated successfully.
Mar 2 13:04:44.083800 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:04:44.084801 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:04:44.086155 systemd-logind[1441]: Removed session 21.
Mar 2 13:04:44.711369 kubelet[2478]: E0302 13:04:44.711224 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:49.092946 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:58300.service - OpenSSH per-connection server daemon (10.0.0.1:58300).
Mar 2 13:04:49.163127 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 58300 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:49.165744 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:49.171658 systemd-logind[1441]: New session 22 of user core.
Mar 2 13:04:49.187810 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:04:49.316037 sshd[3999]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:49.322076 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:58300.service: Deactivated successfully.
Mar 2 13:04:49.324977 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:04:49.327468 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:04:49.330274 systemd-logind[1441]: Removed session 22.
Mar 2 13:04:50.710525 kubelet[2478]: E0302 13:04:50.710400 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:04:54.340904 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:56172.service - OpenSSH per-connection server daemon (10.0.0.1:56172).
Mar 2 13:04:54.385468 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 56172 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:54.387964 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:54.393659 systemd-logind[1441]: New session 23 of user core.
Mar 2 13:04:54.402913 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:04:54.541192 sshd[4033]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:54.545078 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:56172.service: Deactivated successfully.
Mar 2 13:04:54.547182 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:04:54.548112 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:04:54.549497 systemd-logind[1441]: Removed session 23.